SSA Flashcards

1
Q

A hospital has a mission-critical application that uses a RESTful API powered by Amazon API Gateway and AWS Lambda. Medical officers upload PDF reports to the system, which are then stored as static media content in an Amazon S3 bucket.

The security team wants to improve its visibility into cyber-attacks and ensure HIPAA (Health Insurance Portability and Accountability Act) compliance. The company is searching for a solution that continuously monitors object-level S3 API operations and identifies protected health information (PHI) in the reports, with minimal changes to its existing Lambda function.

Which of the following solutions will meet these requirements with the LEAST operational overhead?

Use Amazon Textract Medical with PII redaction turned on to extract and filter sensitive text from the PDF reports. Create a new Lambda function that calls the regular Amazon Comprehend API to identify the PHI from the extracted text.

Use Amazon Textract to extract the text from the PDF reports. Integrate Amazon Comprehend Medical with the existing Lambda function to identify the PHI from the extracted text.

Use Amazon Transcribe to read and analyze the PDF reports using the StartTranscriptionJob API operation.

Use Amazon SageMaker Ground Truth to label and detect protected health information (PHI) content with low-confidence predictions.

Use Amazon Rekognition to extract the text data from the PDF reports. Integrate the Amazon Comprehend Medical service with the existing Lambda functions to identify the PHI from the extracted text.

A

Use Amazon Textract to extract the text from the PDF reports. Integrate Amazon Comprehend Medical with the existing Lambda function to identify the PHI from the extracted text.
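
A sketch of the glue logic this answer implies: extract the text lines from a Textract DetectDocumentText response, then pass the joined text to Comprehend Medical's DetectPHI operation. The sample response below is abbreviated and hypothetical.

```python
# Pull the LINE blocks out of a Textract DetectDocumentText response.
# The parsing is plain Python; only the final call needs AWS.
def extract_lines(textract_response):
    return [
        block["Text"]
        for block in textract_response.get("Blocks", [])
        if block.get("BlockType") == "LINE"
    ]

# Abbreviated, hypothetical Textract response for illustration.
sample = {"Blocks": [
    {"BlockType": "PAGE"},
    {"BlockType": "LINE", "Text": "Patient: John Doe"},
    {"BlockType": "LINE", "Text": "DOB: 01/02/1980"},
]}

text = "\n".join(extract_lines(sample))
# With live credentials, the existing Lambda function would then call:
#   boto3.client("comprehendmedical").detect_phi(Text=text)
print(text)
```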

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

A company has a web-based order processing system that is currently using a standard queue in Amazon SQS. The IT Manager noticed that there are a lot of cases where an order was processed twice. This issue has caused a lot of trouble in processing and made the customers very unhappy. The manager has asked you to ensure that this issue will not recur.

What can you do to prevent this from happening again in the future? (Select TWO.)

Change the message size in SQS.
Alter the visibility timeout of SQS.
Alter the retention period in Amazon SQS.
Replace Amazon SQS with the Amazon Simple Workflow Service.
Use an Amazon SQS FIFO Queue instead.

A

Replace Amazon SQS with the Amazon Simple Workflow Service.

Use an Amazon SQS FIFO Queue instead.
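
Why FIFO helps, sketched locally: a FIFO queue with content-based deduplication drops retried sends of the same message within the deduplication window, so an order cannot be enqueued twice. The model below is a simplified in-memory illustration, not the SQS API; MessageGroupId and the 5-minute deduplication window are real SQS FIFO concepts.

```python
import hashlib

# Minimal local model of SQS FIFO content-based deduplication:
# within the deduplication window, messages whose deduplication ID
# (here, a SHA-256 of the body) was already seen are accepted once.
class FifoQueueModel:
    def __init__(self):
        self.seen_dedup_ids = set()
        self.messages = []

    def send_message(self, body, group_id="orders"):
        dedup_id = hashlib.sha256(body.encode()).hexdigest()
        if dedup_id in self.seen_dedup_ids:
            return None  # duplicate dropped, as a FIFO queue would do
        self.seen_dedup_ids.add(dedup_id)
        self.messages.append({"MessageGroupId": group_id, "Body": body})
        return dedup_id

q = FifoQueueModel()
q.send_message('{"order_id": 42}')
q.send_message('{"order_id": 42}')  # a retried duplicate send
print(len(q.messages))  # only one copy of the order remains
```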

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
3
Q

A company launched an EC2 instance in a newly created VPC. They noticed that the instance does not have an associated DNS hostname.

Which of the following options could be a valid reason for this issue?

The newly created VPC has an invalid CIDR block.
Amazon Route 53 is not enabled.
The DNS resolution and DNS hostnames settings of the VPC configuration are not enabled.
The security group of the EC2 instance needs to be modified.

A

The DNS resolution and DNS hostnames settings of the VPC configuration are not enabled.
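
The fix maps to two EC2 API calls, since modify_vpc_attribute accepts only one attribute per call. A sketch of the parameter payloads, using a placeholder VPC ID:

```python
# The two VPC attributes that must both be true for an instance to
# receive a DNS hostname. The VPC ID below is a placeholder.
vpc_id = "vpc-0123456789abcdef0"

calls = [
    {"VpcId": vpc_id, "EnableDnsSupport": {"Value": True}},
    {"VpcId": vpc_id, "EnableDnsHostnames": {"Value": True}},
]

# With live credentials these would be issued as:
#   ec2 = boto3.client("ec2")
#   for params in calls:
#       ec2.modify_vpc_attribute(**params)
for params in calls:
    print(params)
```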

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
4
Q

To save costs, your manager instructed you to analyze and review the setup of your AWS cloud infrastructure. You should also provide an estimate of how much your company will pay for all of the AWS resources that they are using.

In this scenario, which of the following will incur costs? (Select TWO.)

A running EC2 Instance
A stopped On-Demand EC2 Instance
Public Data Set
Using an Amazon VPC
EBS Volumes attached to stopped EC2 Instances

A

A running EC2 Instance
EBS Volumes attached to stopped EC2 Instances

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
5
Q

A tech company currently has an on-premises infrastructure. They are currently running low on storage and want to have the ability to extend their storage using the AWS cloud.

Which AWS service can help them achieve this requirement?

AWS Storage Gateway
Amazon EC2
Amazon SQS
Amazon Elastic Block Store

A

AWS Storage Gateway

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
6
Q

What is AWS Storage Gateway?

A

AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration, with data security features, between your on-premises environment and the AWS storage infrastructure.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
7
Q

A company has a set of Linux servers running on multiple On-Demand EC2 Instances. The Audit team wants to collect and process the application log files generated from these servers for their report.

Which of the following services is best to use in this case?

A single On-Demand Amazon EC2 instance for both storing and processing the log files

Amazon S3 Glacier for storing the application log files and Spot EC2 Instances for processing them.

Amazon S3 Glacier Deep Archive for storing the application log files and AWS ParallelCluster for processing the log files.

Amazon S3 for storing the application log files and Amazon Elastic MapReduce for processing the log files.

A

Amazon S3 for storing the application log files and Amazon Elastic MapReduce for processing the log files.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
8
Q

A company is using an Auto Scaling group which is configured to launch new t2.micro EC2 instances when there is a significant load increase in the application. To cope with the demand, you now need to replace those instances with a larger t2.2xlarge instance type.

How would you implement this change?

Change the instance type of each EC2 instance manually.

Create a new version of the launch template with the new instance type and update the Auto Scaling Group.
Create another Auto Scaling Group and attach the new instance type.

Just change the instance type to t2.2xlarge in the current launch template.

A

Create a new version of the launch template with the new instance type and update the Auto Scaling Group.
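
A sketch of the two API payloads the answer implies, with placeholder resource names: create_launch_template_version copies version 1 and overrides the instance type, then update_auto_scaling_group points the group at the latest version.

```python
# Payload for ec2.create_launch_template_version: copy the settings
# from version 1 and change only the instance type.
new_version_params = {
    "LaunchTemplateName": "web-app-template",   # placeholder name
    "SourceVersion": "1",
    "LaunchTemplateData": {"InstanceType": "t2.2xlarge"},
}

# Payload for autoscaling.update_auto_scaling_group: make the group
# launch from the newest template version from now on.
update_asg_params = {
    "AutoScalingGroupName": "web-app-asg",      # placeholder name
    "LaunchTemplate": {
        "LaunchTemplateName": "web-app-template",
        "Version": "$Latest",
    },
}
print(new_version_params["LaunchTemplateData"]["InstanceType"])
```

Existing instances keep their old type until they are replaced, e.g. via an instance refresh.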

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
10
Q

A media company needs to configure an Amazon S3 bucket to serve static assets for the public-facing web application. Which methods ensure that all of the objects uploaded to the S3 bucket can be read publicly all over the Internet? (Select TWO.)

Create an IAM role to set the objects inside the S3 bucket to public read.

Configure the S3 bucket policy to set all objects to public read.

Configure the cross-origin resource sharing (CORS) of the S3 bucket to allow objects to be publicly accessible from all domains.

Do nothing. Amazon S3 objects are already public by default.

Grant public read access to the object when uploading it using the S3 Console.

A

Configure the S3 bucket policy to set all objects to public read.

Grant public read access to the object when uploading it using the S3 Console.
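
For the bucket-policy half of the answer, a minimal public-read policy could look like the following, with a placeholder bucket name. Note that the bucket's Block Public Access settings must also permit public policies for this to take effect.

```python
import json

# A bucket policy granting anonymous read access to every object in
# the bucket. The bucket name is a placeholder.
bucket = "example-media-assets"
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
# With live credentials this would be applied with:
#   boto3.client("s3").put_bucket_policy(
#       Bucket=bucket, Policy=json.dumps(public_read_policy))
print(json.dumps(public_read_policy, indent=2))
```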

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
11
Q

A company has hundreds of VPCs with multiple VPN connections to their data centers spanning 5 AWS Regions. As the number of its workloads grows, the company must be able to scale its networks across multiple accounts and VPCs to keep up. A Solutions Architect is tasked to interconnect all of the company’s on-premises networks, VPNs, and VPCs into a single gateway, which includes support for inter-region peering across multiple AWS regions.

Which of the following is the BEST solution that the architect should set up to support the required interconnectivity?

Set up an AWS Transit Gateway in each region to interconnect all networks within it. Then, route traffic between the transit gateways through a peering connection.

Set up an AWS Direct Connect Gateway to achieve inter-region VPC access to all of the AWS resources and on-premises data centers. Set up a link aggregation group (LAG) to aggregate multiple connections at a single AWS Direct Connect endpoint in order to treat them as a single, managed connection. Launch a virtual private gateway in each VPC and then create a public virtual interface for each AWS Direct Connect connection to the Direct Connect Gateway.
Set up an AWS VPN CloudHub for inter-region VPC access and a Direct Connect gateway for the VPN connections to the on-premises data centers. Create a virtual private gateway in each VPC, then create a private virtual interface for each AWS Direct Connect connection to the Direct Connect gateway.

Enable inter-region VPC peering that allows peering relationships to be established between multiple VPCs across different AWS regions. Set up a networking configuration that ensures that the traffic will always stay on the global AWS backbone and never traverse the public Internet.

A

Set up an AWS Transit Gateway in each region to interconnect all networks within it. Then, route traffic between the transit gateways through a peering connection.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
12
Q

A leading IT consulting company has an application that processes a large stream of financial data on an Amazon ECS cluster and then stores the results in a DynamoDB table. You have to design a solution that detects new entries in the DynamoDB table and automatically triggers a Lambda function that runs tests to verify the processed data.

What solution can be easily implemented to alert the Lambda function of new entries while requiring minimal configuration change to your architecture?

Invoke the Lambda functions using SNS each time that the ECS Cluster successfully processed financial data.
Use Systems Manager Automation to detect new entries in the DynamoDB table then automatically invoke the Lambda function for processing.
Use CloudWatch Alarms to trigger the Lambda function whenever a new entry is created in the DynamoDB table.
Enable DynamoDB Streams to capture table activity and automatically trigger the Lambda function.

A

Enable DynamoDB Streams to capture table activity and automatically trigger the Lambda function.
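
A minimal shape for the Lambda function on the receiving end of the stream, using the DynamoDB Streams event format (Records, eventName, dynamodb.NewImage). The attribute name trade_id and the verification step are hypothetical.

```python
# Lambda handler for a DynamoDB stream event source mapping.
# Only INSERT records (new table entries) are verified here.
def handler(event, context=None):
    verified = []
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":
            continue  # skip MODIFY/REMOVE records
        new_image = record["dynamodb"]["NewImage"]
        # Stream images are type-annotated, e.g. {"S": "..."} for strings.
        trade_id = new_image["trade_id"]["S"]
        verified.append(trade_id)  # placeholder for the real test logic
    return {"verified": verified}

# A hypothetical sample event in the DynamoDB Streams format.
sample_event = {
    "Records": [{
        "eventName": "INSERT",
        "dynamodb": {"NewImage": {"trade_id": {"S": "T-1001"}}},
    }]
}
print(handler(sample_event))
```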

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
13
Q

A company is using an On-Demand EC2 instance to host a legacy web application that uses an Amazon Instance Store-Backed AMI. The web application should be decommissioned as soon as possible and hence, you need to terminate the EC2 instance.

When the instance is terminated, what happens to the data on the root volume?

Data is automatically saved as an EBS snapshot.

Data is automatically saved as an EBS volume.

Data is automatically deleted.

Data is unavailable until the instance is restarted.

A

Data is automatically deleted.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
14
Q

A company conducts performance testing on a t3.large MySQL RDS DB instance twice a week. They use Performance Insights to analyze and fine-tune expensive queries. The company needs to reduce its operational expense in running the tests without compromising the tests’ integrity.

Which of the following is the most cost-effective solution?

Once the testing is completed, take a snapshot of the database and terminate it. Restore the database from the snapshot when necessary.

Stop the database once the test is done and restart it only when necessary.

Perform a mysqldump to get a copy of the database on a local machine. Use MySQL Workbench to analyze the queries.

Downgrade the database instance to t3.small.

A

Once the testing is completed, take a snapshot of the database and terminate it. Restore the database from the snapshot when necessary.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
15
Q

A popular augmented reality (AR) mobile game heavily uses a RESTful API hosted in AWS. The API uses Amazon API Gateway and a DynamoDB table with preconfigured read and write capacity. Based on your systems monitoring, the DynamoDB table begins to throttle requests during peak loads, which causes slow performance in the game.

Which of the following can you do to improve the performance of your app?

Create an SQS queue in front of the DynamoDB table.

Integrate an Application Load Balancer with your DynamoDB table.

Add the DynamoDB table to an Auto Scaling Group.

Use DynamoDB Auto Scaling.

A

Use DynamoDB Auto Scaling.
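
Under the hood, DynamoDB auto scaling is Application Auto Scaling with a target-tracking policy. A sketch of the two payloads (register_scalable_target, then put_scaling_policy) with a placeholder table name:

```python
# Payload for application-autoscaling.register_scalable_target:
# define the floor and ceiling for the table's read capacity.
scalable_target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/GameSessions",   # placeholder table name
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "MinCapacity": 5,
    "MaxCapacity": 500,
}

# Payload for application-autoscaling.put_scaling_policy: keep
# consumed/provisioned read utilization near the target value.
scaling_policy = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/GameSessions",
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "PolicyName": "read-scaling",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 70.0,  # aim for ~70% utilization
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
}
```

A matching pair would be registered for WriteCapacityUnits to scale writes as well.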

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
16
Q

A company decided to change its third-party data analytics tool to a cheaper solution. They sent a full data export as a CSV file containing all of their analytics information. You then save the CSV file to an S3 bucket for storage. Your manager asked you to do some validation on the provided data export.

In this scenario, what is the most cost-effective and easiest way to analyze export data using standard SQL?

Create a migration tool to load the CSV export file from S3 to a DynamoDB instance. Once the data has been loaded, run queries using DynamoDB.

Use mysqldump client utility to load the CSV export file from S3 to a MySQL RDS instance. Run some SQL queries once the data has been loaded to complete your validation.

To be able to run SQL queries, use Amazon Athena to analyze the export data file in S3.

Use a migration tool to load the CSV export file from S3 to a database that is designed for online analytical processing (OLAP), such as Amazon Redshift. Run some queries once the data has been loaded to complete your validation.

A

To be able to run SQL queries, use Amazon Athena to analyze the export data file in S3.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
17
Q

A company has a global news website hosted on a fleet of EC2 instances. Lately, the load on the website has increased, which has resulted in slower response times for site visitors. This issue impacts the company's revenue, as some readers tend to leave the site if it does not load within 10 seconds.

Which of the below services in AWS can be used to solve this problem? (Select TWO.)

Use Amazon CloudFront with the website as the custom origin.

For better read throughput, use AWS Storage Gateway to distribute the content across multiple regions.

Use Amazon ElastiCache for the website’s in-memory data store or cache.

Deploy the website to all regions in different VPCs for faster processing.

A

Use Amazon CloudFront with the website as the custom origin.

Use Amazon ElastiCache for the website’s in-memory data store or cache.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
18
Q

A company needs to integrate the Lightweight Directory Access Protocol (LDAP) directory service from the on-premises data center to the AWS VPC using IAM. The identity store which is currently being used is not compatible with SAML.

Which of the following provides the most valid approach to implement the integration?

Develop an on-premises custom identity broker application and use STS to issue short-lived AWS credentials.

Use AWS Single Sign-On (SSO) service to enable single sign-on between AWS and your LDAP.

Use an IAM policy that references the LDAP identifiers and AWS credentials.

Use IAM roles to rotate the IAM credentials whenever LDAP credentials are updated.

A

Develop an on-premises custom identity broker application and use STS to issue short-lived AWS credentials.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
19
Q

A startup is planning to set up and govern a secure, compliant, multi-account AWS environment in preparation for its upcoming projects. The IT Manager requires the solution to have a dashboard for continuous detection of policy non-conformance and non-compliant resources across the enterprise, as well as to comply with the AWS multi-account strategy best practices.

Which of the following offers the easiest way to fulfill this task?

Use AWS Organizations to build a landing zone to automatically provision new AWS accounts. Utilize the AWS Personal Health Dashboard to see provisioned accounts across your enterprise. Enable preventive and detective guardrails for policy enforcement.

Launch new AWS member accounts using AWS CloudFormation StackSets. Use AWS Config to continuously track configuration changes and set rules to monitor non-compliant resources. Set up a Multi-Account Multi-Region Data Aggregator to monitor compliance data for rules and accounts in an aggregated view.

Use AWS Service Catalog to launch new AWS member accounts. Configure AWS Service Catalog Launch Constraints to continuously track configuration changes and monitor non-compliant resources. Set up a Multi-Account Multi-Region Data Aggregator to monitor compliance data for rules and accounts in an aggregated view.

Use AWS Control Tower to launch a landing zone to automatically provision and configure new accounts through an Account Factory. Utilize the AWS Control Tower dashboard to monitor provisioned accounts across your enterprise. Set up preventive and detective guardrails for policy enforcement.

A

Use AWS Control Tower to launch a landing zone to automatically provision and configure new accounts through an Account Factory. Utilize the AWS Control Tower dashboard to monitor provisioned accounts across your enterprise. Set up preventive and detective guardrails for policy enforcement.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
20
Q

An organization plans to use an AWS Direct Connect connection to establish a dedicated connection between its on-premises network and AWS. The organization needs to launch a fully managed solution that will automate and accelerate the replication of data to and from various AWS storage services.

Which of the following solutions would you recommend?

Use an AWS Storage Gateway tape gateway to store data on virtual tape cartridges and asynchronously copy your backups to AWS.
Use an AWS DataSync agent to rapidly move the data over the Internet.
Use an AWS DataSync agent to rapidly move the data over a service endpoint.
Use an AWS Storage Gateway file gateway to store and retrieve files directly using the SMB file system protocol.

A

Use an AWS DataSync agent to rapidly move the data over a service endpoint.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
21
Q

What is AWS DataSync?

A

AWS DataSync automates and accelerates the replication of data between your on-premises storage systems and AWS storage services.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
22
Q

A multinational bank is storing its confidential files in an S3 bucket. The security team recently performed an audit, and the report shows that multiple files have been uploaded without 256-bit Advanced Encryption Standard (AES) server-side encryption. For added protection, the encryption key must be automatically rotated every year. The solutions architect must ensure that there would be no other unencrypted files uploaded in the S3 bucket in the future.

Which of the following will meet these requirements with the LEAST operational overhead?

Create an S3 bucket policy that denies permissions to upload an object unless the request includes the "s3:x-amz-server-side-encryption": "AES256" header. Enable server-side encryption with Amazon S3-managed encryption keys (SSE-S3) and rely on the built-in key rotation feature of the SSE-S3 encryption keys.

Create a new customer-managed key (CMK) from the AWS Key Management Service (AWS KMS). Configure the default encryption behavior of the bucket to use the customer-managed key. Manually rotate the CMK each and every year.

Create an S3 bucket policy for the S3 bucket that rejects any object uploads unless the request includes the "s3:x-amz-server-side-encryption": "aws:kms" header. Enable S3 Object Lock in compliance mode for all objects to automatically rotate the built-in AES256 customer-managed key of the bucket.

Create a Service Control Policy (SCP) for the S3 bucket that rejects any object uploads unless the request includes the "s3:x-amz-server-side-encryption": "AES256" header. Enable server-side encryption with Amazon S3-managed encryption keys (SSE-S3) and modify the built-in key rotation feature of the SSE-S3 encryption keys to rotate the key yearly.

A

Create an S3 bucket policy that denies permissions to upload an object unless the request includes the "s3:x-amz-server-side-encryption": "AES256" header. Enable server-side encryption with Amazon S3-managed encryption keys (SSE-S3) and rely on the built-in key rotation feature of the SSE-S3 encryption keys.
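
The deny statement the correct answer describes could be expressed as follows, with a placeholder bucket name:

```python
import json

# Reject any PutObject request that does not declare SSE-S3 (AES256).
# An explicit Deny overrides any Allow, so no principal can bypass it.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::confidential-files/*",  # placeholder
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
        },
    }],
}
print(json.dumps(policy))
```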

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
23
Q

A company launched a global news website that is deployed to AWS and is using MySQL RDS. The website has millions of viewers from all over the world, which means that the website has a read-heavy database workload. All database transactions must be ACID compliant to ensure data integrity.

In this scenario, which of the following is the best option to use to increase the read-throughput on the MySQL database?

Use SQS to queue up the requests
Enable Multi-AZ deployments
Enable Amazon RDS Standby Replicas
Enable Amazon RDS Read Replicas

A

Enable Amazon RDS Read Replicas

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
24
Q

A food company bought 50 licenses of Windows Server to be used by the developers when launching Amazon EC2 instances to deploy and test applications. The developers are free to provision EC2 instances as long as there is a license available. The licenses are tied to the total CPU count of each virtual machine. The company wants to ensure that developers won't be able to launch new instances once the licenses are exhausted, and it wants to receive notifications when all licenses are in use.

Which of the following options is the recommended solution to meet the company’s requirements?

Configure AWS Resource Access Manager (AWS RAM) to track and control the licenses used by AWS resources. Configure AWS RAM to provide available licenses for Amazon EC2 instances. Set up an Amazon SNS topic to send notifications and alerts once all licenses are used.
Upload the licenses to AWS Systems Manager Fleet Manager to be encrypted and distributed to Amazon EC2 instances. Attach an IAM role to the EC2 instances to request a license from Fleet Manager. Set up an Amazon SNS topic to send notifications and alerts once all licenses are used.

Define license configuration rules on AWS Certificate Manager to track and control license usage. Enable the option to “Enforce certificate limit” to prevent going over the number of allocated licenses. Add an Amazon SQS queue with ChangeVisibility Timeout configured to send notifications and alerts.

Define licensing rules on AWS License Manager to track and control license usage. Enable the option to “Enforce license limit” to prevent going over the number of allocated licenses. Add an Amazon SNS topic to send notifications and alerts.

A

Define licensing rules on AWS License Manager to track and control license usage. Enable the option to “Enforce license limit” to prevent going over the number of allocated licenses. Add an Amazon SNS topic to send notifications and alerts.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
25
Q

A company is looking to store its confidential financial files in AWS, where they are accessed every week. The Architect was instructed to set up a storage system that uses envelope encryption and automates key rotation. For security purposes, it should also provide an audit trail that shows who used the encryption key and when.

Which combination of actions should the Architect implement to satisfy the requirement in the most cost-effective way? (Select TWO.)

Configure Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS).
Configure Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3).
Configure Server-Side Encryption with Customer-Provided Keys (SSE-C).
Use Amazon S3 Glacier Deep Archive to store the data.
Use Amazon S3 to store the data.

A

Configure Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS).

Use Amazon S3 to store the data.
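
An upload requesting SSE-KMS could pass parameters like these to S3's PutObject (placeholder bucket, key, and KMS key ID). Every use of the KMS key is logged in AWS CloudTrail, which supplies the required audit trail.

```python
# Parameters for s3.put_object requesting envelope encryption with a
# KMS key. S3 asks KMS for a data key, encrypts the object with it,
# and stores the data key encrypted under the KMS key.
put_params = {
    "Bucket": "confidential-financial-files",          # placeholder
    "Key": "reports/2024-q1.xlsx",                     # placeholder
    "Body": b"<file bytes>",
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder
}
# With live credentials: boto3.client("s3").put_object(**put_params)
```

Setting the bucket's default encryption to the same KMS key would make the header optional for uploaders.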

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
26
Q

There is a new compliance rule in your company that audits every Windows and Linux EC2 instance each month for any performance issues. There are more than a hundred EC2 instances running in production, and each must have a logging function that collects various system details about that instance. The SysOps team will periodically review these logs and analyze their contents using AWS analytics tools, and the results will need to be retained in an S3 bucket.

In this scenario, what is the most efficient way to collect and analyze logs from the instances with minimal effort?

Install the Amazon Inspector agent in each instance, which will collect and push data to CloudWatch Logs periodically. Set up a CloudWatch dashboard to properly analyze the log data of all instances.
Install the AWS Systems Manager Agent (SSM Agent) in each instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.
Install AWS SDK in each instance and create a custom daemon script that would collect and push data to CloudWatch Logs periodically. Enable CloudWatch detailed monitoring and use CloudWatch Logs Insights to analyze the log data of all instances.
Install the unified CloudWatch Logs agent in each instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.

A

Install the unified CloudWatch Logs agent in each instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
27
Q

A media company is using Amazon EC2, ELB, and S3 for its video-sharing portal for filmmakers. They are using the S3 Standard storage class to store all high-quality videos, which are frequently accessed only during the first three months after posting.

As a Solutions Architect, what should you do if the company needs to automatically transfer or archive media data from an S3 bucket to Glacier?

Use a custom shell script that transfers data from the S3 bucket to Glacier
Use Amazon SWF
Use Amazon SQS
Use Lifecycle Policies

A

Use Lifecycle Policies
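
A lifecycle rule matching the scenario's 90-day frequent-access window might look like this, with a placeholder prefix; it would be applied with S3's PutBucketLifecycleConfiguration.

```python
# Transition objects under a prefix to Glacier 90 days after creation,
# matching the three-month frequent-access window in the scenario.
lifecycle_config = {
    "Rules": [{
        "ID": "archive-videos",
        "Status": "Enabled",
        "Filter": {"Prefix": "videos/"},   # placeholder prefix
        "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
    }]
}
# With live credentials:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="media-portal-videos", LifecycleConfiguration=lifecycle_config)
```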

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
28
Q

What are AppSync pipeline resolvers?

A

AppSync pipeline resolvers offer an elegant server-side solution to a common challenge in web applications: aggregating data from multiple database tables. Instead of invoking multiple API calls across different data sources, which can degrade application performance and user experience, AppSync pipeline resolvers enable easy retrieval of data from multiple sources with just a single call. By leveraging pipeline functions, these resolvers streamline the process of consolidating and presenting data to end users.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

AWS Systems Manager Run Command

A

AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
30
Q

A company plans to use Route 53 instead of an ELB to load balance incoming requests to the web application. The system is deployed on two EC2 instances, to which the traffic needs to be distributed. You want to set a specific percentage of traffic to go to each instance.

Which routing policy would you use?

Weighted
Failover
Latency
Geolocation

A

Weighted
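
A sketch of the Route 53 change batch for a 70/30 weighted split across the two instances; the domain name and IP addresses are placeholders. Route 53 routes each record in proportion to its Weight relative to the sum of all weights for that name.

```python
# Build one weighted A record for ChangeResourceRecordSets.
# Records sharing a Name/Type need distinct SetIdentifier values.
def weighted_record(set_id, ip, weight):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",        # placeholder domain
            "Type": "A",
            "SetIdentifier": set_id,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

change_batch = {"Changes": [
    weighted_record("instance-a", "198.51.100.10", 70),
    weighted_record("instance-b", "198.51.100.11", 30),
]}
# With live credentials:
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z0000000000000", ChangeBatch=change_batch)
```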

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
31
Q

An organization plans to run an application in a dedicated physical server that doesn’t use virtualization. The application data will be stored in a storage solution that uses an NFS protocol. To prevent data loss, you need to use a durable cloud storage service to store a copy of your data.

Which of the following is the most suitable solution to meet the requirement?

Use an AWS Storage Gateway hardware appliance for your compute resources. Configure File Gateway to store the application data and create an Amazon S3 bucket to store a backup of your data.

Use an AWS Storage Gateway hardware appliance for your compute resources. Configure Volume Gateway to store the application data and backup data.

Use AWS Storage Gateway with a gateway VM appliance for your compute resources. Configure File Gateway to store the application data and backup data.

Use an AWS Storage Gateway hardware appliance for your compute resources. Configure Volume Gateway to store the application data and create an Amazon S3 bucket to store a backup of your data.

A

Use an AWS Storage Gateway hardware appliance for your compute resources. Configure File Gateway to store the application data and create an Amazon S3 bucket to store a backup of your data.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
32
Q

A company is running a batch job on an EC2 instance inside a private subnet. The instance gathers input data from an S3 bucket in the same region through a NAT Gateway. The company is looking for a solution that will reduce costs without imposing risks on redundancy or availability.

Which solution will accomplish this?

Re-assign the NAT Gateway to a lower EC2 instance type.

Deploy a Transit Gateway to create a peering connection between the instance and the S3 bucket.

Remove the NAT Gateway and use a Gateway VPC endpoint to access the S3 bucket from the instance.

Replace the NAT Gateway with a NAT instance hosted on a burstable instance type.

A

Remove the NAT Gateway and use a Gateway VPC endpoint to access the S3 bucket from the instance.
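
The replacement the answer describes maps to a single CreateVpcEndpoint call; a sketch with placeholder IDs. A Gateway endpoint adds a route for the S3 prefix list to the chosen route tables and, unlike a NAT Gateway, has no hourly charge.

```python
# Parameters for ec2.create_vpc_endpoint: a Gateway endpoint for S3
# attached to the private subnet's route table. All IDs and the
# Region in the service name are placeholders.
endpoint_params = {
    "VpcId": "vpc-0123456789abcdef0",
    "ServiceName": "com.amazonaws.us-east-1.s3",
    "VpcEndpointType": "Gateway",
    "RouteTableIds": ["rtb-0123456789abcdef0"],
}
# With live credentials:
#   boto3.client("ec2").create_vpc_endpoint(**endpoint_params)
```

Once the endpoint's route exists, the instance reaches S3 in the same Region privately and the NAT Gateway can be deleted.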

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
33
Q

A social media company needs to capture the detailed information of all HTTP requests that went through their public-facing Application Load Balancer every five minutes. The client’s IP address and network latencies must also be tracked. They want to use this data for analyzing traffic patterns and for troubleshooting their Docker applications orchestrated by the Amazon ECS Anywhere service.

Which of the following options meets the customer requirements with the LEAST amount of overhead?

Install and run the AWS X-Ray daemon on the Amazon ECS cluster. Use the Amazon CloudWatch ServiceLens to analyze the traffic that goes through the application.

Enable access logs on the Application Load Balancer. Integrate the Amazon ECS cluster with Amazon CloudWatch Application Insights to analyze traffic patterns and simplify troubleshooting.

Integrate Amazon EventBridge (Amazon CloudWatch Events) metrics on the Application Load Balancer to capture the client IP address. Use Amazon CloudWatch Container Insights to analyze traffic patterns.

Enable AWS CloudTrail for their Application Load Balancer. Use the AWS CloudTrail Lake to analyze and troubleshoot the application traffic.

A

Enable access logs on the Application Load Balancer. Integrate the Amazon ECS cluster with Amazon CloudWatch Application Insights to analyze traffic patterns and simplify troubleshooting.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
34
Q

An e-commerce company’s Chief Information Security Officer (CISO) has taken necessary measures to ensure that sensitive customer data is secure in the cloud. However, the company recently discovered that some customer Personally Identifiable Information (PII) was mistakenly uploaded to an S3 bucket.

The company aims to rectify this mistake and prevent any similar incidents from happening again in the future. Additionally, the company would like to be notified if this error occurs again.

As the Solutions Architect, which combination of options should you implement in this scenario? (Select TWO.)

Identify sensitive data using Amazon Macie and create an Amazon EventBridge (Amazon CloudWatch Events) rule to capture the SensitiveData event type.

Set up an Amazon SNS topic as the target for an Amazon EventBridge (Amazon CloudWatch Events) rule that sends notifications when the error occurs again.

Identify sensitive data using Amazon GuardDuty by creating an Amazon EventBridge (Amazon CloudWatch Events) rule to include the CRITICAL event types from GuardDuty findings.

Set up an Amazon SQS as the target for an Amazon EventBridge (Amazon CloudWatch Events) rule that sends notifications when the error occurs again.

Set up an AWS IoT Message Broker as the target for an Amazon EventBridge (Amazon CloudWatch Events) rule that sends notifications when the SensitiveData:S3Object/Personal event occurs again.

A

Identify sensitive data using Amazon Macie and create an Amazon EventBridge (Amazon CloudWatch Events) rule to capture the SensitiveData event type.

Set up an Amazon SNS topic as the target for an Amazon EventBridge (Amazon CloudWatch Events) rule that sends notifications when the error occurs again.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
35
Q

A company has a web application hosted on a fleet of EC2 instances located in two Availability Zones that are all placed behind an Application Load Balancer. As a Solutions Architect, you have to add a health check configuration to ensure your application is highly-available.

Which health checks will you implement?

ICMP health check
FTP health check
HTTP or HTTPS health check
TCP health check

A

HTTP or HTTPS health check

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
36
Q

A Solutions Architect is migrating several Windows-based applications to AWS that require a scalable file system storage for high-performance computing (HPC). The storage service must have full support for the SMB protocol and Windows NTFS, Active Directory (AD) integration, and Distributed File System (DFS).

Which of the following is the MOST suitable storage service that the Architect should use to fulfill this scenario?

Amazon FSx for Lustre
Amazon FSx for Windows File Server
Amazon S3 Glacier Deep Archive
AWS DataSync

A

Amazon FSx for Windows File Server

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
37
Q

A company has a web application hosted in their on-premises infrastructure that they want to migrate to the AWS cloud. Your manager has instructed you to ensure that there is no downtime while the migration process is ongoing. To achieve this, your team decided to divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure. Once the migration is over and the application works with no issues, a full diversion to AWS will be implemented. The company’s VPC is connected to its on-premises network via an AWS Direct Connect connection.

Which of the following are the possible solutions that you can implement to satisfy the above requirement? (Select TWO.)

Use a Network Load balancer with Weighted Target Groups to divert the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure.

Use Route 53 with Weighted routing policy to divert the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure.

Use an Application Elastic Load balancer with Weighted Target Groups to divert and proportion the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure.

Use Route 53 with Failover routing policy to divert and proportion the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure.

Use AWS Global Accelerator to divert and proportion the HTTP and HTTPS traffic between the on-premises and AWS-hosted application. Ensure that the on-premises network has an AnyCast static IP address and is connected to your VPC via a Direct Connect Gateway.

A

Use Route 53 with Weighted routing policy to divert the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure.

Use an Application Elastic Load balancer with Weighted Target Groups to divert and proportion the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure.
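The Route 53 half of the answer can be sketched as two weighted record sets; with boto3 they would go into `route53.change_resource_record_sets()`. The domain name, IP addresses, and TTL below are illustrative, not from the question.

```python
def weighted_record(name, value, weight, set_id):
    """Build one weighted A record; traffic share = weight / sum of weights."""
    return {
        "Name": name,
        "Type": "A",
        "SetIdentifier": set_id,   # distinguishes records with the same name
        "Weight": weight,
        "TTL": 60,                 # short TTL so the split takes effect quickly
        "ResourceRecords": [{"Value": value}],
    }

# Hypothetical 50/50 split between on-premises and AWS endpoints.
records = [
    weighted_record("app.example.com", "203.0.113.10", 50, "on-premises"),
    weighted_record("app.example.com", "198.51.100.10", 50, "aws"),
]
```

When the migration completes, shifting the weights to 0/100 performs the full diversion to AWS without any DNS restructuring.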

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
38
Q

A leading media company has recently adopted a hybrid cloud architecture which requires them to migrate their application servers and databases in AWS. One of their applications requires a heterogeneous database migration in which you need to transform your on-premises Oracle database to PostgreSQL in AWS. This entails a schema and code transformation before the proper data migration starts.

Which of the following options is the most suitable approach to migrate the database in AWS?

Configure a Launch Template that automatically converts the source schema and code to match that of the target database. Then, use the AWS Database Migration Service to migrate data from the source database to the target database.

First, use the AWS Schema Conversion Tool to convert the source schema and application code to match that of the target database, and then use the AWS Database Migration Service to migrate data from the source database to the target database.

Use Amazon Neptune to convert the source schema and code to match that of the target database in RDS. Use the AWS Batch to effectively migrate the data from the source database to the target database in a batch process.

Heterogeneous database migration is not supported in AWS. You have to transform your database first to PostgreSQL and then migrate it to RDS.

A

First, use the AWS Schema Conversion Tool to convert the source schema and application code to match that of the target database, and then use the AWS Database Migration Service to migrate data from the source database to the target database.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
39
Q

An application is hosted in an On-Demand EC2 instance and is using Amazon SDK to communicate to other AWS services such as S3, DynamoDB, and many others. As part of the upcoming IT audit, you need to ensure that all API calls to your AWS resources are logged and durably stored.

Which is the most suitable service that you should use to meet this requirement?

AWS X-Ray
Amazon CloudWatch
Amazon API Gateway
AWS CloudTrail

A

AWS CloudTrail

Records AWS Management Console actions and API calls.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
40
Q

A financial company wants to store their data in Amazon S3 but at the same time, they want to store their frequently accessed data locally on their on-premises server. This is due to the fact that they do not have the option to extend their on-premises storage, which is why they are looking for a durable and scalable storage service to use in AWS.

What is the best solution for this scenario?

Use a fleet of EC2 instance with EBS volumes to store the commonly used data.

Use both Elasticache and S3 for frequently accessed data.

Use Amazon Glacier.

Use the Amazon Storage Gateway – Cached Volumes.

A

Use the Amazon Storage Gateway – Cached Volumes.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

A company needs to accelerate the performance of its AI-powered medical diagnostic application by running its machine learning workloads on the edge of telecommunication carriers’ 5G networks. The application must be deployed to a Kubernetes cluster and have role-based access control (RBAC) access to IAM users and roles for cluster authentication.

Which of the following should the Solutions Architect implement to ensure single-digit millisecond latency for the application?

Host the application to an Amazon EKS cluster and run the Kubernetes pods on AWS Fargate. Create node groups in AWS Wavelength Zones for the Amazon EKS cluster. Add the EKS pod execution IAM role (AmazonEKSFargatePodExecutionRole) to your cluster and ensure that the Fargate profile has the same IAM role as your Amazon EC2 node groups.

Launch the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create VPC endpoints for the AWS Wavelength Zones and apply them to the Amazon EKS cluster. Install the AWS IAM Authenticator for Kubernetes (aws-iam-authenticator) to your cluster.

Host the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Set up node groups in AWS Wavelength Zones for the Amazon EKS cluster. Attach the Amazon EKS connector agent role (AmazonECSConnectorAgentRole) to your cluster and use AWS Control Tower for RBAC access.

Launch the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create node groups in Wavelength Zones for the Amazon EKS cluster via the AWS Wavelength service. Apply the AWS authenticator configuration map (aws-auth ConfigMap) to your cluster.

A

Launch the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create node groups in Wavelength Zones for the Amazon EKS cluster via the AWS Wavelength service. Apply the AWS authenticator configuration map (aws-auth ConfigMap) to your cluster.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
42
Q

An e-commerce application is using a fanout messaging pattern for its order management system. For every order, it sends an Amazon SNS message to an SNS topic, and the message is replicated and pushed to multiple Amazon SQS queues for parallel asynchronous processing. A Spot EC2 instance retrieves the message from each SQS queue and processes the message. In one incident, an EC2 instance was abruptly terminated while it was processing a message, and the processing was not completed in time.

In this scenario, what happens to the SQS message?

When the message visibility timeout expires, the message becomes available for processing by other EC2 instances
The message will be sent to a Dead Letter Queue in AWS DataSync.
The message will automatically be assigned to the same EC2 instance when it comes back online within or after the visibility timeout.
The message is deleted and becomes duplicated in the SQS when the EC2 instance comes online.

A

When the message visibility timeout expires, the message becomes available for processing by other EC2 instances
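The visibility-timeout behavior in the answer can be shown with a minimal in-memory sketch (not real SQS): a received message is hidden rather than deleted, and if the consumer dies before deleting it, the message reappears for other consumers once the timeout expires.

```python
# Minimal sketch of SQS visibility-timeout semantics.
class MiniQueue:
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}  # body -> timestamp until which it is invisible

    def send(self, body):
        self.messages[body] = 0.0  # visible immediately

    def receive(self, now):
        for body, invisible_until in self.messages.items():
            if now >= invisible_until:
                # Hide the message for the timeout window instead of deleting it.
                self.messages[body] = now + self.visibility_timeout
                return body
        return None

    def delete(self, body):
        # A well-behaved consumer deletes the message after processing.
        self.messages.pop(body, None)

q = MiniQueue(visibility_timeout=30)
q.send("order-42")
first = q.receive(now=0)    # instance A takes the message, then crashes
second = q.receive(now=10)  # still invisible: no other instance receives it
third = q.receive(now=31)   # timeout expired: instance B can now process it
```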

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
43
Q

A company has an On-Demand EC2 instance with an attached EBS volume. A scheduled job creates a snapshot of this EBS volume every midnight at 12 AM, when the instance is not in use. One night, there is a production incident during which you need to perform a change on both the instance and the EBS volume while the snapshot is taking place.

Which of the following scenarios is true when it comes to the usage of an EBS volume while a snapshot is in progress?

The EBS volume can be used in read-only mode while the snapshot is in progress.

The EBS volume cannot be used until the snapshot completes.

The EBS volume can be used while the snapshot is in progress.

The EBS volume cannot be detached or attached to an EC2 instance until the snapshot completes

A

The EBS volume can be used while the snapshot is in progress.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
44
Q

A company plans to deploy a Docker-based batch application in AWS. The application will be used to process both mission-critical data as well as non-essential batch jobs.

Which of the following is the most cost-effective option to use in implementing this architecture?

Use ECS as the container management service then set up On-Demand EC2 Instances for processing both mission-critical and non-essential batch jobs.

Use ECS as the container management service then set up Spot EC2 Instances for processing both mission-critical and non-essential batch jobs.

Use ECS as the container management service then set up Reserved EC2 Instances for processing both mission-critical and non-essential batch jobs.

Use ECS as the container management service then set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively.

A

Use ECS as the container management service then set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
45
Q

An On-Demand EC2 instance is launched into a VPC subnet with the Network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance’s security group has an inbound rule to allow SSH from any IP address and does not have any outbound rules.

In this scenario, what are the changes needed to allow SSH connection to the instance?

The outbound security group needs to be modified to allow outbound traffic.
The network ACL needs to be modified to allow outbound traffic.
No action needed. It can already be accessed from any IP address using SSH.
Both the outbound security group and outbound network ACL need to be modified to allow outbound traffic.

A

The network ACL needs to be modified to allow outbound traffic

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
46
Q

A Solutions Architect is working for a multinational telecommunications company. The IT Manager wants to consolidate their log streams including the access, application, and security logs in one single system. Once consolidated, the company will analyze these logs in real-time based on heuristics. There will be some time in the future where the company will need to validate heuristics, which requires going back to data samples extracted from the last 12 hours.

What is the best approach to meet this requirement?

First, configure Amazon Cloud Trail to receive custom logs and then use EMR to apply heuristics on the logs.

First, send all the log events to Amazon SQS then set up an Auto Scaling group of EC2 servers to consume the logs and finally, apply the heuristics.

First, send all of the log events to Amazon Kinesis then afterwards, develop a client process to apply heuristics on the logs.

First, set up an Auto Scaling group of EC2 servers then store the logs on Amazon S3 then finally, use EMR to apply heuristics on the logs

A

First, send all of the log events to Amazon Kinesis then afterwards, develop a client process to apply heuristics on the logs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
46
Q

A data analytics startup is collecting clickstream data and stores them in an S3 bucket. You need to launch an AWS Lambda function to trigger the ETL jobs to run as soon as new data becomes available in Amazon S3.

Which of the following services can you use as an extract, transform, and load (ETL) service in this scenario?

Redshift Spectrum
AWS Glue
AWS Step Functions
S3 Select

A

AWS Glue

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
47
Q

A company needs to use Amazon S3 to store irreproducible financial documents. For their quarterly reporting, the files are required to be retrieved after a period of 3 months. There will be some occasions when a surprise audit will be held, which requires access to the archived data that they need to present immediately.

What will you do to satisfy this requirement in a cost-effective way?

Use Amazon S3 Standard
Use Amazon Glacier Deep Archive
Use Amazon S3 Standard – Infrequent Access
Use Amazon S3 -Intelligent Tiering

A

Use Amazon S3 Standard – Infrequent Access

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
48
Q

A company has multiple AWS Site-to-Site VPN connections placed between their VPCs and their remote network. During peak hours, many employees are experiencing slow connectivity issues, which limits their productivity. The company has asked a solutions architect to scale the throughput of the VPN connections.

Which solution should the architect carry out?

Associate the VPCs to an Equal Cost Multipath Routing (ECMR)-enabled transit gateway and attach additional VPN tunnels.

Modify the VPN configuration by increasing the number of tunnels to scale the throughput.

Add more virtual private gateways to a VPC and enable Equal Cost Multipath Routing (ECMR) to get higher VPN bandwidth.

Re-route some of the VPN connections to a secondary customer gateway device on the remote network’s end.

A

Associate the VPCs to an Equal Cost Multipath Routing (ECMR)-enabled transit gateway and attach additional VPN tunnels.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
48
Q

A company has a web-based ticketing service that utilizes Amazon SQS and a fleet of EC2 instances. The EC2 instances that consume messages from the SQS queue are configured to poll the queue as often as possible to keep end-to-end throughput as high as possible. The Solutions Architect noticed that polling the queue in tight loops is using unnecessary CPU cycles, resulting in increased operational costs due to empty responses.

In this scenario, what should the Solutions Architect do to make the system more cost-effective?

Configure Amazon SQS to use short polling by setting the ReceiveMessageWaitTimeSeconds to zero.

Configure Amazon SQS to use short polling by setting the ReceiveMessageWaitTimeSeconds to a number greater than zero.

Configure Amazon SQS to use long polling by setting the ReceiveMessageWaitTimeSeconds to a number greater than zero.

Configure Amazon SQS to use long polling by setting the ReceiveMessageWaitTimeSeconds to zero.

A

Configure Amazon SQS to use long polling by setting the ReceiveMessageWaitTimeSeconds to a number greater than zero.
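In boto3 terms, the switch to long polling is a single parameter change; the queue URL below is hypothetical. The wait time can be set per request (`WaitTimeSeconds`) or as the queue-level default (`ReceiveMessageWaitTimeSeconds`).

```python
# Per-request long polling: the call holds the connection open for up to
# 20 seconds waiting for messages instead of returning (billable) empty
# responses in a tight loop.
receive_params = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/111122223333/orders",  # hypothetical
    "MaxNumberOfMessages": 10,
    "WaitTimeSeconds": 20,  # > 0 enables long polling; 20 s is the maximum
}

# Queue-level default, applied to every consumer via set_queue_attributes:
queue_attributes = {"ReceiveMessageWaitTimeSeconds": "20"}
```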

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
49
Q

A local bank has an in-house application that handles sensitive financial data in a private subnet. After the data is processed by the EC2 worker instances, they will be delivered to S3 for ingestion by other services.

How should you design this solution so that the data does not pass through the public Internet?

Create an Internet gateway in the public subnet with a corresponding route entry that directs the data to S3.
Configure a Transit gateway along with a corresponding route entry that directs the data to S3.
Provision a NAT gateway in the private subnet with a corresponding route entry that directs the data to S3.
Configure a VPC Endpoint along with a corresponding route entry that directs the data to S3.

A

Configure a VPC Endpoint along with a corresponding route entry that directs the data to S3.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
50
Q

A business plans to deploy an application on EC2 instances within an Amazon VPC and is considering adopting a Network Load Balancer to distribute incoming traffic among the instances. A solutions architect needs to suggest a solution that will enable the security team to inspect traffic entering and exiting their VPC.

Which approach satisfies the requirements?

Use the Network Access Analyzer service on the application’s VPC for inspecting ingress and egress traffic. Create a new Network Access Scope to filter and analyze all incoming and outgoing requests.
Enable Traffic Mirroring on the Network Load Balancer and forward traffic to the instances. Create a traffic mirror filter to inspect the ingress and egress of data that traverses your Amazon VPC.
Create a firewall using the AWS Network Firewall service at the VPC level then add custom rule groups for inspecting ingress and egress traffic. Update the necessary VPC route tables.
Create a firewall at the subnet level using the Amazon Detective service. Inspect the ingress and egress traffic using the VPC Reachability Analyzer.

A

Create a firewall using the AWS Network Firewall service at the VPC level then add custom rule groups for inspecting ingress and egress traffic. Update the necessary VPC route tables.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
50
Q

A company needs to accelerate the development of its GraphQL APIs for its new customer service portal. The solution must be serverless to lower the monthly operating cost of the business. Their GraphQL APIs must be accessible via HTTPS and have a custom domain.

What solution should the Solutions Architect implement to meet the above requirements?

Deploy the GraphQL APIs as Kubernetes pods to AWS Fargate and AWS Outposts using Amazon EKS Anywhere for deployment. Create a custom domain using Amazon CloudFront and enable the Origin Shield feature to allow HTTPS communication to the GraphQL APIs.

Develop the application using the AWS AppSync service and use its built-in custom domain feature. Associate an SSL certificate to the AWS AppSync API using the AWS Certificate Manager (ACM) service to enable HTTPS communication.

Launch an AWS Elastic Beanstalk environment and use Amazon Route 53 for the custom domain. Configure Domain Name System Security Extensions (DNSSEC) in the Route 53 hosted zone to enable HTTPS communication.

Host the application in the VMware Cloud on AWS service. Associate a custom domain to the GraphSQL APIs via the AWS Directory Service for Microsoft Active Directory and provide multiple domain controllers to enable HTTPS communication.

A

Develop the application using the AWS AppSync service and use its built-in custom domain feature. Associate an SSL certificate to the AWS AppSync API using the AWS Certificate Manager (ACM) service to enable HTTPS communication.

51
Q

An application is hosted on an EC2 instance with multiple EBS Volumes attached and uses Amazon Neptune as its database. To improve data security, you encrypted all of the EBS volumes attached to the instance to protect the confidential data stored in the volumes.

Which of the following statements are true about encrypted Amazon Elastic Block Store volumes? (Select TWO.)

Only the data in the volume is encrypted and not all the data moving between the volume and the instance.

The volumes created from the encrypted snapshot are not encrypted.

Snapshots are not automatically encrypted.

Snapshots are automatically encrypted.

All data moving between the volume and the instance are encrypted.

A

Snapshots are automatically encrypted.

All data moving between the volume and the instance are encrypted.

52
Q

A startup needs to use a shared file system for its .NET web application running on an Amazon EC2 Windows instance. The file system must provide a high level of throughput and IOPS that can also be integrated with Microsoft Active Directory.

Which is the MOST suitable service that you should use to achieve this requirement?

AWS Storage Gateway – File Gateway
Amazon EBS Provisioned IOPS SSD volumes
Amazon FSx for Windows File Server
Amazon Elastic File System

A

Amazon FSx for Windows File Server

53
Q

A wellness company is currently working on a wearable device that monitors key health metrics such as heart rate, sleep, and steps per day. The device is designed to send data to an Amazon S3 bucket for storage and analysis. On a daily basis, the device produces 1 MB of data. In order to quickly process and summarize this data, the company requires 512 MB of memory and must complete the task within a maximum of 10 seconds.

Which solution can fulfill these requirements in the MOST cost-effective manner?

Store the data in Amazon Redshift and process it with AWS Lambda.

Create an AWS Glue PySpark job to process the data.

Use AWS Lambda with a Python library for processing.

Use Amazon Kinesis Data Firehose to send the data from the device to Amazon S3. Process the data on an EC2 instance with at least 512 MB of memory.

A

Use AWS Lambda with a Python library for processing.
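The workload (1 MB/day, 512 MB of memory, under 10 seconds) fits comfortably inside Lambda's limits (up to 10 GB of memory and a 15-minute timeout), which is what makes it the cheapest option here. A hypothetical function configuration, as it would be passed to `lambda.create_function()`:

```python
# Hypothetical Lambda configuration matching the stated requirements.
lambda_config = {
    "FunctionName": "summarize-health-metrics",  # hypothetical name
    "Runtime": "python3.12",
    "Handler": "app.handler",
    "MemorySize": 512,  # MB, exactly as required
    "Timeout": 10,      # seconds, the stated maximum processing time
}
```

Billing is per-millisecond of execution at the configured memory size, so a daily 10-second, 512 MB invocation costs a tiny fraction of an always-on EC2 instance or a Glue job.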

54
Q

A healthcare company stores sensitive patient health records in their on-premises storage systems. These records must be kept indefinitely and protected from any type of modifications once they are stored. Compliance regulations mandate that the records must have granular access control and each data access must be audited at all levels. Currently, there are millions of obsolete records that are not accessed by their web application, and their on-premises storage is quickly running out of space. The Solutions Architect must design a solution to immediately move existing records to AWS and support the ever-growing number of new health records.

Which of the following is the most suitable solution that the Solutions Architect should implement to meet the above requirements?

Set up AWS Storage Gateway to move the existing health records from the on-premises network to the AWS Cloud. Launch a new Amazon S3 bucket to store existing and new records. Enable AWS CloudTrail with Management Events and Amazon S3 Object Lock in the bucket.

Set up AWS DataSync to move the existing health records from the on-premises network to the AWS Cloud. Launch a new Amazon S3 bucket to store existing and new records. Enable AWS CloudTrail with Data Events and Amazon S3 Object Lock in the bucket.

Set up AWS DataSync to move the existing health records from the on-premises network to the AWS Cloud. Launch a new Amazon S3 bucket to store existing and new records. Enable AWS CloudTrail with Management Events and Amazon S3 Object Lock in the bucket.

Set up AWS Storage Gateway to move the existing health records from the on-premises network to the AWS Cloud. Launch an Amazon EBS-backed EC2 instance to store both the existing and new records. Enable Amazon S3 server access logging and S3 Object Lock in the bucket.

A

Set up AWS DataSync to move the existing health records from the on-premises network to the AWS Cloud. Launch a new Amazon S3 bucket to store existing and new records. Enable AWS CloudTrail with Data Events and Amazon S3 Object Lock in the bucket.

55
Q

An investment bank has a distributed batch processing application which is hosted in an Auto Scaling group of Spot EC2 instances with an SQS queue. You configured your components to use client-side buffering so that the calls made from the client will be buffered first and then sent as a batch request to SQS.

What is the term for the period of time during which the SQS queue prevents other consuming components from receiving and processing a message?

Visibility Timeout
Processing Timeout
Receiving Timeout
Component Timeout

A

Visibility Timeout

56
Q

A company has recently adopted a hybrid cloud architecture and is planning to migrate a database hosted on-premises to AWS. The database currently has over 50 TB of consumer data, handles highly transactional (OLTP) workloads, and is expected to grow. The Solutions Architect should ensure that the database is ACID-compliant and can handle complex queries of the application.

Which type of database service should the Architect use?

Amazon RDS
Amazon Aurora
Amazon Redshift
Amazon DynamoDB

A

Amazon Aurora

57
Q

A company has an application hosted in an Amazon ECS Cluster behind an Application Load Balancer. The Solutions Architect is building a sophisticated web filtering solution that allows or blocks web requests based on the country that the requests originate from. However, the solution should still allow specific IP addresses from that country.

Which combination of steps should the Architect implement to satisfy this requirement? (Select TWO.)

Add another rule in the AWS WAF web ACL with a geo match condition that blocks requests that originate from a specific country.

In the Application Load Balancer, create a listener rule that explicitly allows requests from approved IP addresses.

Set up a geo match condition in the Application Load Balancer that blocks requests from a specific country.

Using AWS WAF, create a web ACL with a rule that explicitly allows requests from approved IP addresses declared in an IP Set.

Place a Transit Gateway in front of the VPC where the application is hosted and set up Network ACLs that block requests that originate from a specific country.

A

Add another rule in the AWS WAF web ACL with a geo match condition that blocks requests that originate from a specific country.
Using AWS WAF, create a web ACL with a rule that explicitly allows requests from approved IP addresses declared in an IP Set.
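The two answer choices combine into one web ACL with two rules: an allow rule referencing the IP set of approved addresses, evaluated before a geo-match rule that blocks the rest of that country's traffic. A hedged sketch in WAFv2-style rule dictionaries; the IP set ARN and country code are hypothetical placeholders.

```python
# Allow rule: evaluated first (lower Priority number), so approved IPs
# from the blocked country still get through.
allow_approved_ips = {
    "Name": "allow-approved-ips",
    "Priority": 0,
    "Action": {"Allow": {}},
    "Statement": {
        "IPSetReferenceStatement": {
            # Hypothetical ARN of an IP set holding the approved addresses.
            "ARN": "arn:aws:wafv2:us-east-1:111122223333:regional/ipset/approved/abc123"
        }
    },
}

# Block rule: matches every other request originating from the country.
block_country = {
    "Name": "block-country",
    "Priority": 1,
    "Action": {"Block": {}},
    "Statement": {"GeoMatchStatement": {"CountryCodes": ["XX"]}},  # placeholder code
}
```

Rule order matters: if the geo-match block ran first, the approved IPs would never reach the allow rule.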

58
Q

A company deployed a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. The Solutions Architect expects the S3 bucket to immediately receive over 2000 PUT requests and 3500 GET requests per second at peak hour.

What should the Solutions Architect do to ensure optimal performance?

Do nothing. Amazon S3 will automatically manage performance at this scale.
Use Byte-Range Fetches to retrieve multiple ranges of an object data per GET request.
Add a random prefix to the key names.
Use a predictable naming scheme in the key names such as sequential numbers or date time sequences.

A

Do nothing. Amazon S3 will automatically manage performance at this scale.

59
Q

The start-up company that you are working for has a batch job application that is currently hosted on an EC2 instance. It is set to process messages from a queue created in SQS with default settings. You configured the application to process the messages once a week. After 2 weeks, you noticed that not all messages are being processed by the application.

What is the root cause of this issue?

The batch job application is configured to long polling.
Amazon SQS has automatically deleted the messages that have been in a queue for more than the maximum message retention period.
Missing permissions in SQS.
The SQS queue is set to short-polling.

A

Amazon SQS has automatically deleted the messages that have been in a queue for more than the maximum message retention period.
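SQS retains messages for 4 days by default and 14 days at most, so a weekly consumer needs the retention period raised. A sketch of the queue attribute that fixes this, as it would be passed to `sqs.set_queue_attributes()`:

```python
# SQS message retention: default is 4 days (345,600 s); maximum is 14 days.
MAX_RETENTION_SECONDS = 14 * 24 * 60 * 60  # 1,209,600 seconds

# Attribute values are strings in the SQS API.
queue_attributes = {"MessageRetentionPeriod": str(MAX_RETENTION_SECONDS)}
```

Even at the maximum, messages older than 14 days are silently deleted, so a once-a-week consumer comfortably fits, but a once-a-month one would not.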

59
Q

A company plans to design a highly available architecture in AWS. They have two target groups with three EC2 instances each, which are added to an Application Load Balancer. In the security group of the EC2 instance, you have verified that port 80 for HTTP is allowed. However, the instances are still showing out of service from the load balancer.

What could be the root cause of this issue?

The health check configuration is not properly defined.
The wrong subnet was used in your VPC
The wrong instance type was used for the EC2 instance.
The instances are using the wrong AMI.

A

The health check configuration is not properly defined.
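An open port 80 is not enough: the ALB health check must hit a path that returns the expected status code, or healthy instances are marked out of service. A hypothetical target-group health-check configuration showing the settings that typically need reviewing:

```python
# Hypothetical ALB target-group health-check settings. If the path does
# not exist, returns a non-matching code, or the thresholds/intervals are
# misconfigured, targets show as unhealthy despite port 80 being open.
health_check = {
    "HealthCheckProtocol": "HTTP",
    "HealthCheckPort": "80",
    "HealthCheckPath": "/health",        # must exist and return a matching code
    "HealthCheckIntervalSeconds": 30,
    "HealthyThresholdCount": 3,          # consecutive successes to become healthy
    "UnhealthyThresholdCount": 2,        # consecutive failures to become unhealthy
    "Matcher": {"HttpCode": "200"},      # expected response code(s)
}
```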

60
Q

A company needs secure access to its Amazon RDS for MySQL database that is used by multiple applications. Each IAM user must use a short-lived authentication token to connect to the database.

Which of the following is the most suitable solution in this scenario?

Use an MFA token to access and connect to a database.
Use AWS Secrets Manager to generate and store short-lived authentication tokens.
Use IAM DB Authentication and create database accounts using the AWS-provided AWSAuthenticationPlugin plugin in MySQL.
Use AWS IAM Identity Center to access the RDS database.

A

Use IAM DB Authentication and create database accounts using the AWS-provided AWSAuthenticationPlugin plugin in MySQL.
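A sketch of how IAM DB Authentication fits together: the MySQL account is created with the AWSAuthenticationPlugin instead of a password, and each connection then presents a short-lived token (valid for 15 minutes). The account name below is hypothetical.

```python
# IAM DB Authentication sketch: the account delegates authentication to IAM.
db_user = "app_user"  # hypothetical database account name

create_user_sql = (
    f"CREATE USER '{db_user}'@'%' "
    "IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';"
)

# With boto3, the short-lived token used in place of a password would come
# from (not executed here, since it needs AWS credentials):
#   rds.generate_db_auth_token(DBHostname=..., Port=3306, DBUsername=db_user)
```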

61
Q

An operations team has an application running on EC2 instances inside two custom VPCs. The VPCs are located in the Ohio and N. Virginia Regions, respectively. The team wants to transfer data between the instances without traversing the public internet.

Which combination of steps will achieve this? (Select TWO.)

Set up a VPC peering connection between the VPCs.
Create an Egress-only Internet Gateway.
Deploy a VPC endpoint on each region to enable a private connection.
Re-configure the route table’s target and destination of the instances’ subnet.
Launch a NAT Gateway in the public subnet of each VPC.

A

Set up a VPC peering connection between the VPCs.
Re-configure the route table’s target and destination of the instances’ subnet.
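
After the inter-Region peering connection is accepted, each VPC's route table needs a route whose destination is the peer VPC's CIDR block and whose target is the peering connection. A sketch of the two route entries (all IDs and CIDRs are hypothetical; each dict would be passed to boto3's `ec2.create_route()` in its own Region):

```python
# Route in the Ohio VPC's route table pointing at the N. Virginia VPC.
ohio_route = {
    "RouteTableId": "rtb-0aaa1111bbbb2222c",
    "DestinationCidrBlock": "10.1.0.0/16",       # N. Virginia VPC CIDR
    "VpcPeeringConnectionId": "pcx-0123456789abcdef0",
}
# Route in the N. Virginia VPC's route table pointing back at Ohio.
virginia_route = {
    "RouteTableId": "rtb-0ddd3333eeee4444f",
    "DestinationCidrBlock": "10.0.0.0/16",       # Ohio VPC CIDR
    "VpcPeeringConnectionId": "pcx-0123456789abcdef0",
}
```

With both routes in place, instance-to-instance traffic stays on the AWS network and never traverses the public internet.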

62
Q

A financial analytics application that collects, processes and analyzes stock data in real-time is using Kinesis Data Streams. The producers continually push data to Kinesis Data Streams while the consumers process the data in real time.

In Amazon Kinesis, where can the consumers store their results? (Select TWO.)

AWS Glue
Amazon S3
Glacier Select
Amazon Redshift
Amazon Athena

A

Amazon S3
Amazon Redshift

(Glacier selet is not an option because its a query service)

63
Q

A company is using an Amazon RDS for MySQL 5.6 with Multi-AZ deployment enabled and several web servers across two AWS Regions. The database is currently experiencing highly dynamic reads due to the growth of the company’s website. The Solutions Architect tried to test the read performance from the secondary AWS Region and noticed a notable slowdown on the SQL queries.

Which of the following options would provide a read replication latency of less than 1 second?

Create an Amazon RDS for MySQL read replica in the secondary AWS Region.

Upgrade the MySQL database engine.

Use Amazon ElastiCache to improve database performance.

Migrate the existing database to Amazon Aurora and create a cross-region read replica.

A

Migrate the existing database to Amazon Aurora and create a cross-region read replica.

64
Q

A company is using Amazon S3 to store frequently accessed data. The S3 bucket is shared with external users that will upload files regularly. A Solutions Architect needs to implement a solution that will grant the bucket owner full access to all uploaded objects in the S3 bucket.

What action should be done to achieve this task?

Enable server access logging and set up an IAM policy that will require the users to set the object’s ACL to bucket-owner-full-control.
Create a CORS configuration in the S3 bucket.
Enable the Requester Pays feature in the Amazon S3 bucket.

Create a bucket policy that will require the users to set the object’s ACL to bucket-owner-full-control.

A

Create a bucket policy that will require the users to set the object’s ACL to bucket-owner-full-control.
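
A minimal sketch of such a bucket policy: it denies `PutObject` unless the uploader grants the bucket owner full control through the object ACL. The bucket name is hypothetical.

```python
import json

# Bucket policy that rejects any upload whose ACL is not
# bucket-owner-full-control. Applied with s3.put_bucket_policy().
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireBucketOwnerFullControl",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::shared-uploads-bucket/*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
        },
    }],
}
policy_json = json.dumps(bucket_policy)
```

External users must then include `x-amz-acl: bucket-owner-full-control` on each PUT, or the upload is rejected.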

65
Q

A disaster recovery team is planning to back up on-premises records to a local file server share through SMB protocol. To meet the company’s business continuity plan, the team must ensure that a copy of data from 48 hours ago is available for immediate access. Accessing older records with delay is tolerable.

Which should the DR team implement to meet the objective with the LEAST amount of configuration effort?

Create an AWS Backup plan to copy data backups to a local SMB share every 48 hours.
Use an AWS Storage File gateway with enough storage to keep data from the last 48 hours. Send the backups to an SMB share mounted as a local disk.
Create an SMB file share in Amazon FSx for Windows File Server that has enough storage to store all backups. Access the file share from on-premises.
Mount an Amazon EFS file system on the on-premises client and copy all backups to an NFS share.

A

Use an AWS Storage File gateway with enough storage to keep data from the last 48 hours. Send the backups to an SMB share mounted as a local disk.

65
Q

An online stock trading system is hosted in AWS and uses an Auto Scaling group of EC2 instances, an RDS database, and an Amazon ElastiCache for Redis. You need to improve the data security of your in-memory data store by requiring the user to enter a password before they are granted permission to execute Redis commands.

Which of the following should you do to meet the above requirement?

Enable the in-transit encryption for Redis replication groups.

Create a new Redis replication group and set the AtRestEncryptionEnabled parameter to true.

Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled.

Do nothing. This feature is already enabled by default.

A

Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled.
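
Redis AUTH requires in-transit encryption to be enabled on the same replication group. A hedged sketch of the relevant boto3 `elasticache.create_replication_group()` parameters (the group ID, node type, and token value are hypothetical):

```python
# Parameters enabling Redis AUTH: the AuthToken only takes effect when
# TransitEncryptionEnabled is also set. The token must be 16-128
# printable characters.
replication_group_params = {
    "ReplicationGroupId": "trading-cache",
    "ReplicationGroupDescription": "Session store protected by Redis AUTH",
    "Engine": "redis",
    "CacheNodeType": "cache.r6g.large",
    "TransitEncryptionEnabled": True,               # --transit-encryption-enabled
    "AuthToken": "a-strong-token-16-to-128-chars",  # --auth-token
}
```

Clients must then issue the AUTH command with this token before any other Redis command is accepted.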

66
Q

A company plans to implement a network monitoring system in AWS. The Solutions Architect launched an EC2 instance to host the monitoring system and used CloudWatch to monitor, store, and access the log files of the instance.

Which of the following provides an automated way to send log data to CloudWatch Logs from the Amazon EC2 instance?

CloudWatch Logs agent
CloudTrail Processing Library
CloudTrail with log file validation
AWS Transfer for SFTP

A

CloudWatch Logs agent

67
Q

What is the CloudWatch Logs agent?

A

The CloudWatch Logs agent is software that runs on an EC2 instance and automatically sends its log data to CloudWatch Logs. CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service.

68
Q

A company developed a financial analytics web application hosted in a Docker container using MEAN (MongoDB, Express.js, AngularJS, and Node.js) stack. You want to easily port that web application to AWS Cloud which can automatically handle all the tasks such as balancing load, auto-scaling, monitoring, and placing your containers across your cluster.

Which of the following services can be used to fulfill this requirement?

Amazon Elastic Container Service (Amazon ECS)
AWS Elastic Beanstalk
AWS Compute Optimizer
AWS CloudFormation

A

AWS Elastic Beanstalk

69
Q

A company hosts the microservices of its online multiplayer game in AWS Fargate. To maintain performance, it should handle millions of requests per second sent by gamers around the globe while maintaining ultra-low latencies.

Which of the following must be implemented in the current architecture to satisfy the new requirement?

Launch a new microservice in AWS Fargate that acts as a load balancer since using an ALB or NLB with Fargate is not possible.
Create a new record in Amazon Route 53 with Weighted Routing policy to load balance the incoming traffic.
Launch a new Application Load Balancer.
Launch a new Network Load Balancer.

A

Launch a new Network Load Balancer.

A Network Load Balancer operates at Layer 4 (TCP/UDP) and is designed to handle millions of requests per second while maintaining ultra-low latencies.

70
Q

An airline company receives a lot of requests to book flights, update booking details, and flight check-ins. Since these requests flood the customer support teams, the management wants to build a self-service solution that can handle these requests without a human agent. This solution should be text-based wherein users can type their concerns in a chat box and an AI will analyze their intention, provide answers, or fulfill pre-defined actions automatically.

Which of the following options is the recommended solution for the above requirements?

Deploy a conversational chatbot using Amazon Lex. Define conversation flow for specific user intentions. Integrate AWS Lambda functions as code hooks to perform actions based on user requests.
Deploy a conversational chatbot using Amazon Rekognition. Define conversation flow for specific user intentions. Create AWS Lambda functions that can be invoked depending on user intentions.
Work with an AWS Managed Service Provider (MSP) to deploy a conversational chatbot using Amazon Polly for natural-language processing (NLU). Integrate AWS Lambda functions as code hooks to perform actions based on user requests.
Create a conversational chatbot using Amazon Comprehend for natural-language processing (NLU). Depending on the user’s intent, invoke AWS Lambda functions that can perform the needed actions.

A

Deploy a conversational chatbot using Amazon Lex, which has natural-language understanding (NLU) built in. Integrate AWS Lambda functions as code hooks to perform actions based on user requests.

71
Q

A Solutions Architect needs to set up the required compute resources for an application with workloads that require high, sequential read and write access to very large data sets on local storage.

Which of the following instance type is the most suitable one to use in this scenario?

Compute Optimized Instances
Memory Optimized Instances
General Purpose Instances
Storage Optimized Instances

A

Storage Optimized Instances

71
Q

A company has multiple AWS sandbox accounts that are used by its development team. All developers must be given access to the contents of one of the main account’s S3 buckets. For security purposes, any personally identifiable information (PII) or financial data uploaded in the bucket must be continuously monitored and removed.

How can this be done at the lowest possible cost and with the least amount of configuration effort?

Generate a pre-signed URL for the objects on the S3 bucket. Use the Amazon S3 Storage Lens to discover personally identifiable information (PII) or financial data.
Configure cross-account replication on the S3 bucket. Integrate AWS Audit Manager with the S3 bucket to discover any personally identifiable information (PII) or financial data.
Create an S3 bucket policy that grants access from the sandbox accounts. Use Amazon Macie to discover personally identifiable information (PII) or financial data.
Add S3 read permission to the IAM policy of each IAM user from the sandbox accounts. Use Amazon Detective to discover personally identifiable information (PII) or financial data.

A

Create an S3 bucket policy that grants access from the sandbox accounts. Use Amazon Macie to discover personally identifiable information (PII) or financial data.

72
Q

A company deployed a web application to an EC2 instance that adds a variety of photo effects to a picture uploaded by the users. The application will put the generated photos to an S3 bucket by sending PUT requests to the S3 API.

What is the best option for this scenario considering that you need to have API credentials to be able to send a request to the S3 API?

Store the API credentials in the root web application directory of the EC2 instance.
Create a role in IAM. Afterwards, assign this role to a new EC2 instance.
Store your API credentials in Amazon S3 Glacier.
Encrypt the API credentials and store in any directory of the EC2 instance.

A

Create a role in IAM. Afterwards, assign this role to a new EC2 instance.

73
Q

A multinational company has been building its new data analytics platform with high-performance computing (HPC) workloads, which require a scalable, POSIX-compliant storage service. The data needs to be stored redundantly across multiple AZs and must allow concurrent connections from thousands of EC2 instances hosted across multiple Availability Zones.

Which of the following AWS storage service is the most suitable one to use in this scenario?

Amazon S3
Amazon Elastic File System
Amazon ElastiCache
Amazon EBS Volumes

A

Amazon Elastic File System

74
Q

A fast food company is using AWS to host their online ordering system, which uses an Auto Scaling group of EC2 instances deployed across multiple Availability Zones with an Application Load Balancer in front. To better handle the incoming traffic from various digital devices, you are planning to implement a new routing system where requests that have a URL of <server>/api/android are forwarded to one specific target group named “Android-Target-Group”. Conversely, requests that have a URL of <server>/api/ios are forwarded to a separate target group named “iOS-Target-Group”.

How can you implement this change in AWS?

Replace your ALB with a Network Load Balancer then use host conditions to define rules that forward requests to different target groups based on the URL in the request.

Replace your ALB with a Gateway Load Balancer then use path conditions to define rules that forward requests to different target groups based on the URL in the request.

Use path conditions to define rules that forward requests to different target groups based on the URL in the request.

Use host conditions to define rules that forward requests to different target groups based on the hostname in the host header. This enables you to support multiple domains using a single load balancer.

A

Use path conditions to define rules that forward requests to different target groups based on the URL in the request.
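
Path conditions are expressed as listener rules on the ALB. A hedged sketch of the two rules as boto3 `elbv2.create_rule()` parameters (the listener and target group ARNs are hypothetical placeholders, abbreviated with `...`):

```python
# Rule forwarding <server>/api/android requests to Android-Target-Group.
android_rule = {
    "ListenerArn": "arn:aws:elasticloadbalancing:region:account:listener/app/example",
    "Priority": 10,
    "Conditions": [{"Field": "path-pattern", "Values": ["/api/android*"]}],
    "Actions": [{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:region:account:targetgroup/Android-Target-Group/abc",
    }],
}
# Rule forwarding <server>/api/ios requests to iOS-Target-Group.
ios_rule = {
    "ListenerArn": "arn:aws:elasticloadbalancing:region:account:listener/app/example",
    "Priority": 20,
    "Conditions": [{"Field": "path-pattern", "Values": ["/api/ios*"]}],
    "Actions": [{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:region:account:targetgroup/iOS-Target-Group/def",
    }],
}
```

Rules are evaluated in priority order, so the more specific patterns should have lower priority numbers than any catch-all rule.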

75
Q

A Solutions Architect is trying to enable Cross-Region Replication to an S3 bucket but this option is disabled. Which of the following options is a valid reason for this?

The Cross-Region Replication feature is only available for Amazon S3 – Infrequent Access.

In order to use the Cross-Region Replication feature in S3, you need to first enable versioning on the bucket.

This is a premium feature which is only for AWS Enterprise accounts.

The Cross-Region Replication feature is only available for Amazon S3 – One Zone-IA

A

In order to use the Cross-Region Replication feature in S3, you need to first enable versioning on the bucket.
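
Versioning must be enabled on both the source and destination buckets before a replication rule can be created. A sketch of the relevant boto3 parameter sets (bucket names and the IAM role ARN are hypothetical); the first dict goes to `s3.put_bucket_versioning()` and the second to `s3.put_bucket_replication()`:

```python
# Step 1: enable versioning on the bucket (repeat for the destination).
enable_versioning = {
    "Bucket": "source-bucket",
    "VersioningConfiguration": {"Status": "Enabled"},
}
# Step 2: configure the Cross-Region Replication rule.
replication_config = {
    "Bucket": "source-bucket",
    "ReplicationConfiguration": {
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
        "Rules": [{
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},   # empty filter replicates all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
        }],
    },
}
```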

76
Q

A Solutions Architect is implementing a new High-Performance Computing (HPC) system in AWS that involves orchestrating several Amazon Elastic Container Service (Amazon ECS) tasks with an EC2 launch type that is part of an Amazon ECS cluster. The system will be frequently accessed by users around the globe and it is expected that there would be hundreds of ECS tasks running most of the time. The Architect must ensure that its storage system is optimized for high-frequency read and write operations. The output data of each ECS task is around 10 MB but the obsolete data will eventually be archived and deleted so the total storage size won’t exceed 10 TB.

Which of the following is the MOST suitable solution that the Architect should recommend?

Launch an Amazon Elastic File System (Amazon EFS) file system with Bursting Throughput mode and set the performance mode to General Purpose. Configure the EFS file system as the container mount point in the ECS task definition of the Amazon ECS cluster.

Set up an SMB file share by creating an Amazon FSx File Gateway in Storage Gateway. Set the file share as the container mount point in the ECS task definition of the Amazon ECS cluster.

Launch an Amazon DynamoDB table with Amazon DynamoDB Accelerator (DAX) and DynamoDB Streams enabled. Configure the table to be accessible by all Amazon ECS cluster instances. Set the DynamoDB table as the container mount point in the ECS task definition of the Amazon ECS cluster.

Launch an Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode and set the performance mode to Max I/O. Configure the EFS file system as the container mount point in the ECS task definition of the Amazon ECS cluster.

A

Launch an Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode and set the performance mode to Max I/O. Configure the EFS file system as the container mount point in the ECS task definition of the Amazon ECS cluster.
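
The EFS file system (created with Max I/O performance mode and Provisioned Throughput) is mounted into the containers through the task definition. A minimal sketch of the relevant task-definition fragment (file system ID, names, and paths are hypothetical):

```python
# Fragment of an ECS task definition: a volume backed by EFS and the
# container mount point referencing it.
task_definition_fragment = {
    "volumes": [{
        "name": "hpc-output",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",
            "rootDirectory": "/",
            "transitEncryption": "ENABLED",
        },
    }],
    "containerDefinitions": [{
        "name": "hpc-task",
        "mountPoints": [{
            "sourceVolume": "hpc-output",      # must match the volume name
            "containerPath": "/mnt/efs",
        }],
    }],
}
```

Because EFS is shared, hundreds of concurrently running tasks can read and write the same file system, which is what Max I/O mode is optimized for.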

77
Q

A multinational manufacturing company has multiple accounts in AWS to separate their various departments such as finance, human resources, engineering and many others. There is a requirement to ensure that certain access to services and actions are properly controlled to comply with the security policy of the company.

As the Solutions Architect, which is the most suitable way to set up the multi-account AWS environment of the company?

Set up a common IAM policy that can be applied across all AWS accounts.

Provide access to externally authenticated users via Identity Federation. Set up an IAM role to specify permissions for users from each department whose identity is federated from your organization or a third-party identity provider.

Use AWS Organizations and Service Control Policies to control services on each account.

Connect all departments by setting up a cross-account access to each of the AWS accounts of the company. Create and attach IAM policies to your resources based on their respective departments to control access.

A

Use AWS Organizations and Service Control Policies to control services on each account.
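
A Service Control Policy is a JSON document attached to an organizational unit or account. A hedged sketch (the denied services are hypothetical examples of what a security policy might restrict):

```python
import json

# SCP denying specific services across every account it is attached to.
# SCPs set the maximum available permissions; they never grant access,
# so IAM policies inside each account still apply on top of them.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnapprovedServices",
        "Effect": "Deny",
        "Action": ["redshift:*", "sagemaker:*"],
        "Resource": "*",
    }],
}
scp_json = json.dumps(scp)
```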

78
Q

A company plans to host a movie streaming app in AWS. The chief information officer (CIO) wants to ensure that the application is highly available and scalable. The application is deployed to an Auto Scaling group of EC2 instances on multiple AZs. A load balancer must be configured to distribute incoming requests evenly to all EC2 instances across multiple Availability Zones.

Which of the following features should the Solutions Architect use to satisfy these criteria?

Amazon VPC IP Address Manager (IPAM)
Path-based Routing
AWS Direct Connect SiteLink
Cross-zone load balancing

A

Cross-zone load balancing

79
Q

A top university has recently launched its online learning portal where the students can take e-learning courses from the comforts of their homes. The portal is on a large On-Demand EC2 instance with a single Amazon Aurora database.

How can you improve the availability of your Aurora database to prevent any unnecessary downtime of the online portal?

Use an Asynchronous Key Prefetch in Amazon Aurora to improve the performance of queries that join tables across indexes.

Enable Hash Joins to improve the database query performance.

Deploy Aurora to two Auto-Scaling groups of EC2 instances across two Availability Zones with an elastic load balancer which handles load balancing.

Create Amazon Aurora Replicas.

A

Create Amazon Aurora Replicas.

80
Q

A global news network created a CloudFront distribution for their web application. However, you noticed that the application’s origin server is being hit for each request instead of the AWS Edge locations, which serve the cached objects. The issue occurs even for the commonly requested objects.

What could be a possible cause of this issue?

There are two primary origins configured in your Amazon CloudFront Origin Group.
The Cache-Control max-age directive is set to zero.
The file sizes of the cached objects are too large for CloudFront to handle.
An object is only cached by CloudFront once a successful request has been made hence, the objects were not requested before, which is why the request is still directed to the origin server.

A

The Cache-Control max-age directive is set to zero
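
With `Cache-Control: max-age=0`, CloudFront treats every object as immediately stale and goes back to the origin on each request. A sketch of uploading an object with a longer TTL so the edge caches can actually serve it (bucket and key are hypothetical); these parameters would go to boto3's `s3.put_object()`:

```python
# Upload an object with a 24-hour Cache-Control TTL so CloudFront edge
# locations can serve it from cache instead of hitting the origin.
put_object_params = {
    "Bucket": "news-static-assets",
    "Key": "images/headline.jpg",
    "Body": b"...",                     # object bytes
    "CacheControl": "max-age=86400",    # cache at the edge for 24 hours
}
```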

81
Q

A company is running an on-premises application backed by a 1TB MySQL 8.0 database. A couple of times each month, the production data is fully copied to a staging database at the request of the analytics team. The team can’t work on the staging database until the copy is finished, which takes hours.

Throughout this period, the application experiences intermittent downtimes as well. To expedite the process for the analytics team, a solutions architect must redesign the application’s architecture in AWS. The application must also be highly resilient to disruptions.

Which combination of actions best satisfies the given set of requirements while being the most cost-effective? (Select TWO)

Clone the production database in the staging environment using Aurora cloning.

Replicate the production database to a staging database using the mysqldump client utility
Take a manual snapshot and restore it to a database in the staging environment.
Use an Amazon RDS database in a Multi-AZ Deployments configuration
Use an Amazon Aurora database with Multi-AZ Replicas.

A

Clone the production database in the staging environment using Aurora cloning.

Use an Amazon Aurora database with Multi-AZ Replicas.

82
Q

A company has an application architecture that stores both the access key ID and the secret access key in a plain text file on a custom Amazon Machine Image (AMI). The EC2 instances, which are created by using this AMI, are using the stored access keys to connect to a DynamoDB table.

What should the Solutions Architect do to make the current architecture more secure?

Put the access keys in Amazon Glacier instead.
Put the access keys in an Amazon S3 bucket instead.
Remove the stored access keys in the AMI. Create a new IAM role with permissions to access the DynamoDB table and assign it to the EC2 instances.
Do nothing. The architecture is already secure because the access keys are already in the Amazon Machine Image.

A

Remove the stored access keys in the AMI. Create a new IAM role with permissions to access the DynamoDB table and assign it to the EC2 instances.

83
Q

A company has a fleet of running Spot EC2 instances behind an Application Load Balancer. The incoming traffic comes from various users across multiple AWS regions, and you would like to have the user’s session shared among the fleet of instances.

A Solutions Architect is required to set up a distributed session management layer that will provide scalable and shared data storage for the user sessions that supports multithreaded performance. The cache layer must also detect any node failures and replace the failed ones automatically.

Which of the following would be the best choice to meet the requirement while still providing sub-millisecond latency for the users?

Amazon RDS database with RDS Proxy
Amazon ElastiCache for Redis Global Datastore
AWS ELB sticky sessions
Amazon ElastiCache for Memcached with Auto Discovery

A

Amazon ElastiCache for Memcached with Auto Discovery

84
Q

A commercial bank has designed its next-generation online banking platform to use a distributed system architecture. As their Software Architect, you have to ensure that their architecture is highly scalable, yet still cost-effective.

Which of the following will provide the most suitable solution for this scenario?

Launch an Auto-Scaling group of EC2 instances to host your application services and an SQS queue. Include an Auto Scaling trigger to watch the SQS queue size which will either scale in or scale out the number of EC2 instances based on the queue.

Launch multiple EC2 instances behind an Application Load Balancer to host your application services and SNS which will act as a highly-scalable buffer that stores messages as they travel between distributed applications.

Launch multiple On-Demand EC2 instances to host your application services and an SQS queue which will act as a highly-scalable buffer that stores messages as they travel between distributed applications.

Launch multiple EC2 instances behind an Application Load Balancer to host your application services, and SWF which will act as a highly-scalable buffer that stores messages as they travel between distributed applications.

A

Launch an Auto-Scaling group of EC2 instances to host your application services and an SQS queue. Include an Auto Scaling trigger to watch the SQS queue size which will either scale in or scale out the number of EC2 instances based on the queue.
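
The scaling trigger watches an SQS CloudWatch metric such as `ApproximateNumberOfMessagesVisible`. A hedged sketch of a target-tracking policy built on that metric (group, policy, and queue names are hypothetical); the dict would be passed to boto3's `autoscaling.put_scaling_policy()`:

```python
# Target-tracking policy: the Auto Scaling group scales out as the queue
# backlog grows and scales in as it drains, aiming to keep the average
# visible-message count near the target value.
scaling_policy = {
    "AutoScalingGroupName": "banking-workers-asg",
    "PolicyName": "scale-on-queue-backlog",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "banking-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,   # desired backlog per instance (hypothetical)
    },
}
```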

85
Q

A web application, which is hosted in your on-premises data center and uses a MySQL database, must be migrated to AWS Cloud. You need to ensure that the network traffic to and from your RDS database instance is encrypted using SSL. For improved security, you have to use the profile credentials specific to your EC2 instance to access your database, instead of a password.

Which of the following should you do to meet the above requirement?

Launch a new RDS database instance with the Backtrack feature enabled.

Configure your RDS database to enable encryption.

Set up an RDS database and enable the IAM DB Authentication.

Launch the mysql client using the --ssl-ca parameter when connecting to the database.

A

Set up an RDS database and enable the IAM DB Authentication.

86
Q

AWS Global Accelerator

A

A service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers, or Amazon EC2 instances.

87
Q

A DevOps Engineer is required to design a cloud architecture in AWS. The Engineer is planning to develop a highly available and fault-tolerant architecture consisting of an Elastic Load Balancer and an Auto Scaling group of EC2 instances deployed across multiple Availability Zones. This will be used by an online accounting application that requires path-based routing, host-based routing, and bi-directional streaming using Remote Procedure Call (gRPC).

Which configuration will satisfy the given requirement?

Configure an Application Load Balancer in front of the auto-scaling group. Select gRPC as the protocol version.

Configure a Network Load Balancer in front of the auto-scaling group. Use a UDP listener for routing.

Configure a Network Load Balancer in front of the auto-scaling group. Create an AWS Global Accelerator accelerator and set the load balancer as an endpoint.

Configure a Gateway Load Balancer in front of the auto-scaling group. Ensure that the IP Listener Routing uses the GENEVE protocol on port 6081 to allow gRPC response traffic.

A

Configure an Application Load Balancer in front of the auto-scaling group. Select gRPC as the protocol version.
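
gRPC support is selected on the target group, not the listener. A hedged sketch of the boto3 `elbv2.create_target_group()` parameters (names, IDs, port, and the matcher range are hypothetical):

```python
# Target group supporting gRPC: only Application Load Balancers accept
# ProtocolVersion "GRPC", which enables bidirectional gRPC streaming
# alongside path- and host-based routing.
grpc_target_group = {
    "Name": "accounting-grpc-tg",
    "Protocol": "HTTP",
    "ProtocolVersion": "GRPC",
    "Port": 50051,
    "VpcId": "vpc-0123456789abcdef0",
    "HealthCheckProtocol": "HTTP",
    "Matcher": {"GrpcCode": "0-99"},    # gRPC status codes counted as healthy
}
```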

88
Q

A company runs a messaging application in the ap-northeast-1 and ap-southeast-2 region. A Solutions Architect needs to create a routing policy wherein a larger portion of traffic from the Philippines and North India will be routed to the resource in the ap-northeast-1 region.

Which Route 53 routing policy should the Solutions Architect use?

Weighted Routing
Geolocation Routing
Geoproximity Routing
Latency Routing

A

Geoproximity Routing

89
Q

Define the following:

Latency Routing

Geoproximity Routing

Geolocation Routing

Weighted Routing

A

Latency Routing lets Amazon Route 53 serve user requests from the AWS Region that provides the lowest latency. It does not, however, guarantee that users in the same geographic region will be served from the same location.

Geoproximity Routing lets Amazon Route 53 route traffic to your resources based on the geographic location of your users and your resources. You can also optionally choose to route more traffic or less to a given resource by specifying a value, known as a bias. A bias expands or shrinks the size of the geographic region from which traffic is routed to a resource.

Geolocation Routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from.

Weighted Routing lets you associate multiple resources with a single domain name (tutorialsdojo.com) or subdomain name (subdomain.tutorialsdojo.com) and choose how much traffic is routed to each resource.

90
Q

An e-commerce company is receiving a large volume of sales data files in .csv format from its external partners on a daily basis. These data files are then stored in an Amazon S3 Bucket for processing and reporting purposes.

The company wants to create an automated solution to convert these .csv files into Apache Parquet format and store the output of the processed files in a new S3 bucket called “tutorialsdojo-data-transformed”. This new solution is meant to enhance the company’s data processing and analytics workloads while keeping its operating costs low.

Which of the following options must be implemented to meet these requirements with the LEAST operational overhead?

Integrate Amazon EMR File System (EMRFS) with the source S3 bucket to automatically discover the new data files. Use an Amazon EMR Serverless with Apache Spark to convert the .csv files to the Apache Parquet format and then store the output in the “tutorialsdojo-data-transformed” bucket.
Utilize an AWS Batch job definition with Bash syntax to convert the .csv files to the Apache Parquet format. Configure the job definition to run automatically whenever a new .csv file is uploaded to the source bucket.

Use Amazon S3 event notifications to trigger an AWS Lambda function that converts .csv files to Apache Parquet format using Apache Spark on an Amazon EMR cluster. Save the processed files to the “tutorialsdojo-data-transformed” bucket.

Use AWS Glue crawler to automatically discover the raw data file in S3 as well as check its corresponding schema. Create a scheduled ETL job in AWS Glue that will convert .csv files to Apache Parquet format and store the output of the processed files in the “tutorialsdojo-data-transformed” bucket.

A

Use AWS Glue crawler to automatically discover the raw data file in S3 as well as check its corresponding schema. Create a scheduled ETL job in AWS Glue that will convert .csv files to Apache Parquet format and store the output of the processed files in the “tutorialsdojo-data-transformed” bucket.

(AWS Glue is a fully managed ETL service, making it the option with the least operational overhead.)
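
A hedged sketch of the two Glue pieces: a crawler that catalogs the raw .csv data and a job that converts it to Parquet. The crawler/job names, role ARN, source bucket, and script location are hypothetical; the dicts would be passed to boto3's `glue.create_crawler()` and `glue.create_job()`:

```python
# Crawler: discovers the raw .csv files and infers their schema into the
# Glue Data Catalog.
crawler_params = {
    "Name": "sales-csv-crawler",
    "Role": "arn:aws:iam::123456789012:role/glue-service-role",
    "DatabaseName": "sales_db",
    "Targets": {"S3Targets": [{"Path": "s3://tutorialsdojo-data-raw/"}]},
}
# ETL job: a scheduled Spark (glueetl) job whose script reads the cataloged
# .csv data and writes Parquet to the tutorialsdojo-data-transformed bucket.
job_params = {
    "Name": "csv-to-parquet",
    "Role": "arn:aws:iam::123456789012:role/glue-service-role",
    "Command": {
        "Name": "glueetl",
        "ScriptLocation": "s3://tutorialsdojo-scripts/csv_to_parquet.py",
    },
    "DefaultArguments": {
        "--TempDir": "s3://tutorialsdojo-temp/",
        "--job-language": "python",
    },
}
```

A Glue trigger (cron schedule) can then run the crawler and job daily as the partner files arrive.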

91
Q

A media company has two VPCs: VPC-1 and VPC-2 with peering connection between each other. VPC-1 only contains private subnets while VPC-2 only contains public subnets. The company uses a single AWS Direct Connect connection and a virtual interface to connect their on-premises network with VPC-1.

Which of the following options increase the fault tolerance of the connection to VPC-1? (Select TWO.)

Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.

Establish a hardware VPN over the Internet between VPC-2 and the on-premises network.

Establish a hardware VPN over the Internet between VPC-1 and the on-premises network.

Use the AWS VPN CloudHub to create a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.

Establish another AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1.

A

Establish a hardware VPN over the Internet between VPC-1 and the on-premises network.

Establish another AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1.

92
Q

A digital media company shares static content to its premium users around the world and also to their partners who syndicate their media files. The company is looking for ways to reduce its server costs and securely deliver their data to their customers globally with low latency.

Which combination of services should be used to provide the MOST suitable and cost-effective architecture? (Select TWO.)

AWS Fargate
Amazon CloudFront
AWS Lambda
Amazon S3
AWS Global Accelerator

A

Amazon CloudFront
Amazon S3

(AWS Global Accelerator is not the best fit because it does not cache content at edge locations; it is generally used for non-HTTP use cases such as TCP/UDP workloads.)

93
Q

A software company has resources hosted in AWS and on-premises servers. You have been requested to create a decoupled architecture for applications which make use of both resources.

Which of the following options are valid? (Select TWO.)

Use RDS to utilize both on-premises servers and EC2 instances for your decoupled application

Use SWF to utilize both on-premises servers and EC2 instances for your decoupled application

Use VPC peering to connect both on-premises servers and EC2 instances for your decoupled application

Use SQS to utilize both on-premises servers and EC2 instances for your decoupled application

Use DynamoDB to utilize both on-premises servers and EC2 instances for your decoupled application

A

Use SQS to utilize both on-premises servers and EC2 instances for your decoupled application

Use SWF to utilize both on-premises servers and EC2 instances for your decoupled application

(Pay attention to the word “decoupled” in this scenario)

94
Q

A company plans to migrate all of their applications to AWS. The Solutions Architect suggested to store all the data to EBS volumes. The Chief Technical Officer is worried that EBS volumes are not appropriate for the existing workloads due to compliance requirements, downtime scenarios, and IOPS performance.

Which of the following are valid points in proving that EBS is the best service to use for migration? (Select TWO.)

EBS volumes support live configuration changes while in production which means that you can modify the volume type, volume size, and IOPS capacity without service interruptions.

An EBS volume is off-instance storage that can persist independently from the life of an instance.

When you create an EBS volume in an Availability Zone, it is automatically replicated on a separate AWS region to prevent data loss due to a failure of any single hardware component.

Amazon EBS provides the ability to create snapshots (backups) of any EBS volume and write a copy of the data in the volume to Amazon RDS, where it is stored redundantly in multiple Availability Zones

EBS volumes can be attached to any EC2 Instance in any Availability Zone.

A

EBS volumes support live configuration changes while in production which means that you can modify the volume type, volume size, and IOPS capacity without service interruptions.

An EBS volume is off-instance storage that can persist independently from the life of an instance.

95
Q

List some important information about EBS Volumes:

A

– When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to a failure of any single hardware component.

– After you create a volume, you can attach it to any EC2 instance in the same Availability Zone

– Amazon EBS Multi-Attach enables you to attach a single Provisioned IOPS SSD (io1 or io2) volume to multiple Nitro-based instances that are in the same Availability Zone. Other EBS volume types do not support Multi-Attach.

– An EBS volume is off-instance storage that can persist independently from the life of an instance. You can specify not to terminate the EBS volume when you terminate the EC2 instance during instance creation.

– EBS volumes support live configuration changes while in production which means that you can modify the volume type, volume size, and IOPS capacity without service interruptions.

– Amazon EBS encryption uses 256-bit Advanced Encryption Standard algorithms (AES-256)

– io2 volumes are designed for 99.999% durability; other EBS volume types are designed for 99.8%–99.9% durability.

96
Q

A multinational company currently operates multiple AWS accounts to support its operations across various branches and business units. The company needs a more efficient and secure approach in managing its vast AWS infrastructure to avoid costly operational overhead.

To address this, they plan to transition to a consolidated, multi-account architecture while integrating a centralized corporate directory service for authentication purposes.

Which combination of options can be used to meet the above requirements? (Select TWO.)

-Set up a new entity in AWS Organizations and configure its authentication system to utilize AWS Directory Service directly.

-Utilize AWS CloudTrail to enable centralized logging and monitoring across all AWS accounts.

-Establish an identity pool through Amazon Cognito and adjust the AWS IAM Identity Center settings to allow Amazon Cognito authentication.

-Integrate AWS IAM Identity Center with the corporate directory service for centralized authentication. Configure a service control policy (SCP) to manage the AWS accounts.

-Implement AWS Organizations to create a multi-account architecture that provides a consolidated view and centralized management of AWS accounts.

A

Integrate AWS IAM Identity Center with the corporate directory service for centralized authentication. Configure a service control policy (SCP) to manage the AWS accounts.
Implement AWS Organizations to create a multi-account architecture that provides a consolidated view and centralized management of AWS accounts.

97
Q

A company has a static corporate website hosted in a standard S3 bucket and a new web domain name that was registered using Route 53. You are instructed by your manager to integrate these two services in order to successfully launch their corporate website.

What are the prerequisites when routing traffic using Amazon Route 53 to a website that is hosted in an Amazon S3 Bucket? (Select TWO.)

The S3 bucket name must be the same as the domain name
The record set must be of type “MX”
A registered domain name
The S3 bucket must be in the same region as the hosted zone
The Cross-Origin Resource Sharing (CORS) option should be enabled in the S3 bucket

A

The S3 bucket name must be the same as the domain name
A registered domain name

98
Q

A company has a top priority requirement to monitor a few database metrics and then afterward, send email notifications to the Operations team in case there is an issue. Which AWS services can accomplish this requirement? (Select TWO.)

Amazon EC2 Instance with a running Berkeley Internet Name Domain (BIND) Server.
Amazon Simple Email Service
Amazon Simple Notification Service (SNS)
Amazon Simple Queue Service (SQS)
Amazon CloudWatch

A

Amazon Simple Notification Service (SNS)
Amazon CloudWatch

99
Q

An organization needs to control the access for several S3 buckets. They plan to use a gateway endpoint to allow access to trusted buckets.

Which of the following could help you achieve this requirement?

Generate a bucket policy for trusted VPCs.
Generate a bucket policy for trusted S3 buckets.
Generate an endpoint policy for trusted S3 buckets.
Generate an endpoint policy for trusted VPCs.

A

Generate an endpoint policy for trusted S3 buckets.
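(A minimal sketch of what such an endpoint policy could look like. The bucket names and the action list are hypothetical, and the policy document would be attached to the gateway endpoint, not to the buckets.)

```python
import json

# Hypothetical trusted buckets; a gateway endpoint policy restricts which
# S3 resources can be reached through the endpoint.
TRUSTED_BUCKETS = ["trusted-reports", "trusted-media"]

def build_endpoint_policy(buckets):
    """Build an S3 gateway endpoint policy that only allows the trusted buckets."""
    resources = []
    for b in buckets:
        resources.append(f"arn:aws:s3:::{b}")    # the bucket itself (for ListBucket)
        resources.append(f"arn:aws:s3:::{b}/*")  # objects inside the bucket
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
                "Resource": resources,
            }
        ],
    }

policy_json = json.dumps(build_endpoint_policy(TRUSTED_BUCKETS), indent=2)
```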

100
Q

A Solutions Architect is building a cloud infrastructure where EC2 instances require access to various AWS services such as S3 and Redshift. The Architect will also need to provide access to system administrators so they can deploy and test their changes.

Which configuration should be used to ensure that the access to the resources is secured and not compromised? (Select TWO.)

Store the AWS Access Keys in the EC2 instance.
Assign an IAM user for each Amazon EC2 Instance.
Store the AWS Access Keys in ACM.
Assign an IAM role to the Amazon EC2 instance.
Enable Multi-Factor Authentication.

A

Enable Multi-Factor Authentication.
Assign an IAM role to the Amazon EC2 instance.

101
Q

A Solutions Architect of a multinational gaming company develops video games for PS4, Xbox One, and Nintendo Switch consoles, plus a number of mobile games for Android and iOS. Due to the wide range of their products and services, the architect proposed that they use API Gateway.

What are the key features of API Gateway that the architect can tell to the client? (Select TWO.)

It automatically provides a query language for your APIs similar to GraphQL.
Provides you with static anycast IP addresses that serve as a fixed entry point to your applications hosted in one or more AWS Regions.
Enables you to build RESTful APIs and WebSocket APIs that are optimized for serverless workloads.
Enables you to run applications requiring high levels of inter-node communications at scale on AWS through its custom-built operating system (OS) bypass hardware interface.
You pay only for the API calls you receive and the amount of data transferred out.

A

Enables you to build RESTful APIs and WebSocket APIs that are optimized for serverless workloads.

You pay only for the API calls you receive and the amount of data transferred out.

102
Q

A company developed a meal planning application that provides meal recommendations for the week as well as the food consumption of the users. The application resides on an EC2 instance which requires access to various AWS services for its day-to-day operations.

Which of the following is the best way to allow the EC2 instance to access the S3 bucket and other AWS services?

Add the API Credentials in the Security Group and assign it to the EC2 instance.
Create a role in IAM and assign it to the EC2 instance.
Store the API credentials in a bastion host.
Store the API credentials in the EC2 instance.

A

Create a role in IAM and assign it to the EC2 instance.
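(A sketch of the trust policy behind this answer: an EC2 role is a role whose trust policy lets the EC2 service assume it, attached to the instance via an instance profile. The role name in the comment is hypothetical.)

```python
def ec2_trust_policy():
    """Trust policy that lets EC2 assume the role (attached via an instance profile)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Service": "ec2.amazonaws.com"},
                "Action": "sts:AssumeRole",
            }
        ],
    }

# With boto3 (not executed here), creating the role would look roughly like:
#   iam.create_role(RoleName="app-ec2-role",  # hypothetical name
#                   AssumeRolePolicyDocument=json.dumps(ec2_trust_policy()))
```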

102
Q

As part of the Business Continuity Plan of your company, your IT Director instructed you to set up an automated backup of all of the EBS Volumes for your EC2 instances as soon as possible.

What is the fastest and most cost-effective solution to automatically back up all of your EBS Volumes?

Use an EBS-cycle policy in Amazon S3 to automatically back up the EBS volumes.
Use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation of EBS snapshots.
Set your Amazon Storage Gateway with EBS volumes as the data source and store the backups in your on-premises servers through the storage gateway.
For an automated solution, create a scheduled job that calls the “create-snapshot” command via the AWS CLI to take a snapshot of production EBS volumes periodically.

A

Use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation of EBS snapshots.
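(A sketch of the PolicyDetails document a DLM lifecycle policy is built from, assuming volumes are selected by a hypothetical Backup tag. The full `dlm.create_lifecycle_policy` call would also need an execution role ARN, a description, and a state.)

```python
def build_dlm_policy_details(tag_key="Backup", tag_value="true", retain_count=7):
    """Daily EBS snapshot schedule targeting tagged volumes (tag names hypothetical)."""
    return {
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": tag_key, "Value": tag_value}],
        "Schedules": [
            {
                "Name": "DailySnapshots",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                "RetainRule": {"Count": retain_count},  # keep the last N snapshots
                "CopyTags": True,
            }
        ],
    }
```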

103
Q

A company plans to migrate its suite of containerized applications running on-premises to a container service in AWS. The solution must be cloud-agnostic and use an open-source platform that can automatically manage containerized workloads and services. It should also use the same configuration and tools across various production environments.

What should the Solution Architect do to properly migrate and satisfy the given requirement?

Migrate the application to Amazon Container Registry (ECR) with Amazon EC2 instance worker nodes.
Migrate the application to Amazon Elastic Kubernetes Service with EKS worker nodes.
Migrate the application to Amazon Elastic Container Service with ECS tasks that use the AWS Fargate launch type.
Migrate the application to Amazon Elastic Container Service with ECS tasks that use the Amazon EC2 launch type.

A

Migrate the application to Amazon Elastic Kubernetes Service with EKS worker nodes.

104
Q

A company is running a custom application in an Auto Scaling group of Amazon EC2 instances. Several instances are failing due to insufficient swap space. The Solutions Architect has been instructed to troubleshoot the issue and effectively monitor the available swap space of each EC2 instance.

Which of the following options fulfills this requirement?

Create a CloudWatch dashboard and monitor the SwapUsed metric.

Enable detailed monitoring on each instance and monitor the SwapUtilization metric.

Create a new trail in AWS CloudTrail and configure Amazon
CloudWatch Logs to monitor your trail logs.

Install the CloudWatch agent on each instance and monitor the SwapUtilization metric.

A

Install the CloudWatch agent on each instance and monitor the SwapUtilization metric.

105
Q

A company developed a web application and deployed it on a fleet of EC2 instances that uses Amazon SQS. The requests are saved as messages in the SQS queue, which is configured with the maximum message retention period. However, after thirteen days of operation, the web application suddenly crashed and there are 10,000 unprocessed messages that are still waiting in the queue. Since they developed the application, they can easily resolve the issue but they need to send a communication to the users on the issue.

What information should they provide and what will happen to the unprocessed messages?

-Tell the users that the application will be operational shortly, however, requests sent over three days ago will need to be resubmitted.
-Tell the users that unfortunately, they have to resubmit all the requests again.
-Tell the users that unfortunately, they have to resubmit all of the requests since the queue would not be able to process the 10,000 messages together.
-Tell the users that the application will be operational shortly and all received requests will be processed after the web application is restarted.

A

Tell the users that the application will be operational shortly and all received requests will be processed after the web application is restarted.

(The maximum SQS message retention period is 14 days, so messages from 13 days ago are still in the queue. Separately, the in-flight message limit is 120,000 for standard queues and 20,000 for FIFO queues.)
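(The 14-day cap can be sketched as a small helper around the `MessageRetentionPeriod` attribute that `sqs.set_queue_attributes` expects, expressed in seconds as a string.)

```python
# SQS caps retention at 14 days (1,209,600 seconds); the scenario's queue is
# configured at this maximum, so 13-day-old messages are still retained.
MAX_RETENTION_SECONDS = 14 * 24 * 60 * 60

def retention_attributes(days):
    """Attributes dict for sqs.set_queue_attributes; days must not exceed the cap."""
    seconds = days * 24 * 60 * 60
    if seconds > MAX_RETENTION_SECONDS:
        raise ValueError("SQS retention cannot exceed 14 days")
    return {"MessageRetentionPeriod": str(seconds)}
```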

106
Q

A company wants to streamline the process of creating multiple AWS accounts within an AWS Organization. Each organization unit (OU) must be able to launch new accounts with preapproved configurations from the security team which will standardize the baselines and network configurations for all accounts in the organization.

Which solution entails the least amount of effort to implement?

Configure AWS Resource Access Manager (AWS RAM) to launch new AWS accounts as well as standardize the baselines and network configurations for each organization unit

Set up an AWS Config aggregator on the root account of the organization to enable multi-account, multi-region data aggregation. Deploy conformance packs to standardize the baselines and network configurations for all accounts.

Centralize the creation of AWS accounts using AWS Systems Manager OpsCenter. Enforce policies and detect violations across all AWS accounts using AWS Security Hub.

Set up an AWS Control Tower Landing Zone. Enable pre-packaged guardrails to enforce policies or detect violations.

A

Set up an AWS Control Tower Landing Zone. Enable pre-packaged guardrails to enforce policies or detect violations.

107
Q

An online events registration system is hosted in AWS and uses ECS to host its front-end tier and an RDS configured with Multi-AZ for its database tier. What are the events that will make Amazon RDS automatically perform a failover to the standby replica? (Select TWO.)

Loss of availability in primary Availability Zone
Storage failure on primary
Storage failure on secondary DB instance
In the event of Read Replica failure
Compute unit failure on secondary DB instance

A

Loss of availability in primary Availability Zone
Storage failure on primary

108
Q

An accounting application uses an RDS database configured with Multi-AZ deployments to improve availability. What would happen to RDS if the primary database instance fails?

The IP address of the primary DB instance is switched to the standby DB instance.
The canonical name record (CNAME) is switched from the primary to standby instance.
The primary database instance will reboot.
A new database instance is created in the standby Availability Zone

A

The canonical name record (CNAME) is switched from the primary to standby instance.

109
Q

A company has an application that continually sends encrypted documents to Amazon S3. The company requires that the configuration for data access is in line with their strict compliance standards. They should also be alerted if there is any risk of unauthorized access or suspicious access patterns.

Which step is needed to meet the requirements?

Use Amazon GuardDuty to monitor malicious activity on S3.
Use Amazon Inspector to alert whenever a security violation is detected on S3.
Use Amazon Macie to monitor and detect access patterns on S3.
Use Amazon Rekognition to monitor and recognize patterns on S3.

A

Use Amazon GuardDuty to monitor malicious activity on S3.

110
Q

A company wants to organize the way it tracks its spending on AWS resources. A report that summarizes the total billing accrued by each department must be generated at the end of the month.

Which solution will meet the requirements?

Tag resources with the department name and configure a budget action in AWS Budget.
Tag resources with the department name and enable cost allocation tags.
Create a Cost and Usage report for AWS services that each department is using.

Use AWS Cost Explorer to view spending and filter usage data by Resource.

A

Tag resources with the department name and enable cost allocation tags.

111
Q

An organization is currently using a tape backup solution to store its application data on-premises. They plan to use a cloud storage service to preserve the backup data for up to 10 years that may be accessed about once or twice a year.

Which of the following is the most cost-effective option to implement this solution?

Use AWS Storage Gateway to back up the data directly to Amazon S3 Glacier.
Use AWS Storage Gateway to back up the data directly to Amazon S3 Glacier Deep Archive.
Use Amazon S3 to store the backup data and add a lifecycle rule to transition the current version to Amazon S3 Glacier.
Order an AWS Snowball Edge appliance to import the backup directly to Amazon S3 Glacier.

A

Use AWS Storage Gateway to back up the data directly to Amazon S3 Glacier Deep Archive.

112
Q

A company receives semi-structured and structured data from different sources, which are eventually stored in their Amazon S3 data lake. The Solutions Architect plans to use big data processing frameworks to analyze this data and access it using various business intelligence tools and standard SQL queries.

Which of the following provides the MOST high-performing solution that fulfills this requirement?

Create an Amazon EC2 instance and store the processed data in Amazon EBS.
Create an Amazon EMR cluster and store the processed data in Amazon Redshift.
Use AWS Glue and store the processed data in Amazon S3.
Use Amazon Managed Service for Apache Flink Studio and store the processed data in Amazon DynamoDB.

A

Create an Amazon EMR cluster and store the processed data in Amazon Redshift.

113
Q

A company has an enterprise web application hosted on Amazon ECS Docker containers that use an Amazon FSx for Lustre filesystem for its high-performance computing workloads. A warm standby environment is running in another AWS region for disaster recovery. A Solutions Architect was assigned to design a system that will automatically route the live traffic to the disaster recovery (DR) environment only in the event that the primary application stack experiences an outage.

What should the Architect do to satisfy this requirement?

Set up a Weighted routing policy configuration in Route 53 by adding health checks on both the primary stack and the DR environment. Configure the network access control list and the route table to allow Route 53 to send requests to the endpoints specified in the health checks. Enable the Evaluate Target Health option by setting it to Yes.

Set up a CloudWatch Events rule to monitor the primary Route 53 DNS endpoint and create a custom Lambda function. Execute the ChangeResourceRecordSets API call using the function to initiate the failover to the secondary DNS record.

Set up a CloudWatch Alarm to monitor the primary Route 53 DNS endpoint and create a custom Lambda function. Execute the ChangeResourceRecordSets API call using the function to initiate the failover to the secondary DNS record.

Set up a failover routing policy configuration in Route 53 by adding a health check on the primary service endpoint. Configure Route 53 to direct the DNS queries to the secondary record when the primary resource is unhealthy. Configure the network access control list and the route table to allow Route 53 to send requests to the endpoints specified in the health checks. Enable the Evaluate Target Health option by setting it to Yes

A

Set up a failover routing policy configuration in Route 53 by adding a health check on the primary service endpoint. Configure Route 53 to direct the DNS queries to the secondary record when the primary resource is unhealthy. Configure the network access control list and the route table to allow Route 53 to send requests to the endpoints specified in the health checks. Enable the Evaluate Target Health option by setting it to Yes
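(A sketch of the record pair behind a failover routing policy, as it would be shaped for a Route 53 `ChangeResourceRecordSets` call. The domain, ALB DNS names, and zone IDs are hypothetical placeholders; `EvaluateTargetHealth: True` corresponds to the "Evaluate Target Health: Yes" setting in the answer.)

```python
def failover_record(name, dns_name, alias_zone_id, role):
    """One ALIAS record of a PRIMARY/SECONDARY failover pair."""
    assert role in ("PRIMARY", "SECONDARY")
    return {
        "Name": name,
        "Type": "A",
        "SetIdentifier": f"{role.lower()}-endpoint",
        "Failover": role,
        "AliasTarget": {
            "HostedZoneId": alias_zone_id,  # zone ID of the ALB, not your own hosted zone
            "DNSName": dns_name,
            "EvaluateTargetHealth": True,   # fail over when the target is unhealthy
        },
    }

change_batch = {
    "Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": failover_record(
            "app.example.com.", "primary-alb.example.amazonaws.com.", "Z_EXAMPLE_1", "PRIMARY")},
        {"Action": "UPSERT", "ResourceRecordSet": failover_record(
            "app.example.com.", "dr-alb.example.amazonaws.com.", "Z_EXAMPLE_2", "SECONDARY")},
    ]
}
```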

114
Q

A company hosts its web application on a set of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The application has an embedded NoSQL database. As the application receives more traffic, the application becomes overloaded mainly due to database requests. The management wants to ensure that the database is eventually consistent and highly available.

Which of the following options can meet the company requirements with the least operational overhead?

-Change the ALB with a Network Load Balancer (NLB) to handle more traffic and integrate AWS Global Accelerator to ensure high availability. Configure replication of the NoSQL database on the set of Amazon EC2 instances to spread the database load.
-Configure the Auto Scaling group to spread the Amazon EC2 instances across three Availability Zones. Use the AWS Database Migration Service (DMS) with a replication server and an ongoing replication task to migrate the embedded NoSQL database to Amazon DynamoDB
-Configure the Auto Scaling group to spread the Amazon EC2 instances across three Availability Zones. Configure replication of the NoSQL database on the set of Amazon EC2 instances to spread the database load.
-Change the ALB with a Network Load Balancer (NLB) to handle more traffic. Use the AWS Migration Service (DMS) to migrate the embedded NoSQL database to Amazon DynamoDB.

A

-Configure the Auto Scaling group to spread the Amazon EC2 instances across three Availability Zones. Use the AWS Database Migration Service (DMS) with a replication server and an ongoing replication task to migrate the embedded NoSQL database to Amazon DynamoDB

115
Q

A business has a network of surveillance cameras installed within the premises of its data center. Management wants to leverage Artificial Intelligence to monitor and detect unauthorized personnel entering restricted areas. Should an unauthorized person be detected, the security team must be alerted via SMS.

Which solution satisfies the requirement?

-Set up Amazon Managed Service for Prometheus to stream live feeds from the cameras. Use Amazon Fraud Detector to detect unauthorized personnel. Set the phone numbers of the security as subscribers to an SNS topic.
-Use Amazon Kinesis Video to stream live feeds from the cameras. Use Amazon Rekognition to detect authorized personnel. Set the phone numbers of the security as subscribers to an SNS topic.
-Configure Amazon Elastic Transcoder to stream live feeds from the cameras. Use Amazon Kendra to detect authorized personnel. Set the phone numbers of the security as subscribers to an SNS topic.
-Replace the existing cameras with AWS IoT. Upload a face detection model to the AWS IoT devices and send them over to AWS Control Tower for checking and notification

A

-Use Amazon Kinesis Video to stream live feeds from the cameras. Use Amazon Rekognition to detect authorized personnel. Set the phone numbers of the security as subscribers to an SNS topic.

116
Q

A music publishing company is building a multitier web application that requires a key-value store which will save the document models. Each model is composed of band ID, album ID, song ID, composer ID, lyrics, and other data. The web tier will be hosted in an Amazon ECS cluster with AWS Fargate launch type.

Which of the following is the MOST suitable setup for the database-tier?

Launch an Amazon Aurora Serverless database.
Use Amazon WorkDocs to store the document models.
Launch an Amazon RDS database with Read Replicas.
Launch a DynamoDB table.

A

Launch a DynamoDB table.

117
Q

A company is building an internal application that serves as a repository for images uploaded by a couple of users. Whenever a user uploads an image, it would be sent to Kinesis Data Streams for processing before it is stored in an S3 bucket. If the upload was successful, the application will return a prompt informing the user that the operation was successful. The entire processing typically takes about 5 minutes to finish.

Which of the following options will allow you to asynchronously process the request to the application from upload request to Kinesis, S3, and return a reply in the most cost-effective manner?

Use a combination of SNS to buffer the requests and then asynchronously process them using On-Demand EC2 Instances.
Replace the Kinesis Data Streams with an Amazon SQS queue. Create a Lambda function that will asynchronously process the requests.
Use a combination of SQS to queue the requests and then asynchronously process them using On-Demand EC2 Instances.
Use a combination of Lambda and Step Functions to orchestrate service components and asynchronously process the requests.

A

Replace the Kinesis Data Streams with an Amazon SQS queue. Create a Lambda function that will asynchronously process the requests.

118
Q

An application is hosted in AWS Fargate and uses RDS database in Multi-AZ Deployments configuration with several Read Replicas. A Solutions Architect was instructed to ensure that all of their database credentials, API keys, and other secrets are encrypted and rotated on a regular basis to improve data security. The application should also use the latest version of the encrypted credentials when connecting to the RDS database.

Which of the following is the MOST appropriate solution to secure the credentials?

Store the database credentials, API keys, and other secrets to Systems Manager Parameter Store each with a SecureString data type. The credentials are automatically rotated by default.
Use AWS Secrets Manager to store and encrypt the database credentials, API keys, and other secrets. Enable automatic rotation for all of the credentials.
Store the database credentials, API keys, and other secrets to AWS ACM.
Store the database credentials, API keys, and other secrets in AWS KMS.

A

Use AWS Secrets Manager to store and encrypt the database credentials, API keys, and other secrets. Enable automatic rotation for all of the credentials.
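(A sketch of how the application side of this answer could look: the secret is stored as a JSON `SecretString`, and fetching it on each connection attempt picks up the latest rotated value. The secret name and field names are hypothetical, and the boto3 call is shown only in comments.)

```python
import json

def parse_db_secret(secret_string):
    """Parse the JSON SecretString returned by Secrets Manager into DB credentials."""
    s = json.loads(secret_string)
    return {"host": s["host"], "username": s["username"], "password": s["password"]}

# Fetching the current value with boto3 (not executed here) would look roughly like:
#   resp = secretsmanager.get_secret_value(SecretId="prod/app/rds")  # hypothetical name
#   creds = parse_db_secret(resp["SecretString"])
```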

119
Q

A commercial bank has a forex trading application. They created an Auto Scaling group of EC2 instances that allow the bank to cope with the current traffic and achieve cost-efficiency. They want the Auto Scaling group to behave in such a way that it will follow a predefined set of parameters before it scales down the number of EC2 instances, which protects the system from unintended slowdown or unavailability.

Which of the following statements are true regarding the cooldown period? (Select TWO.)

It ensures that before the Auto Scaling group scales out, the EC2 instances have an ample time to cooldown.
Its default value is 300 seconds.
It ensures that the Auto Scaling group launches or terminates additional EC2 instances without any downtime.
Its default value is 600 seconds.
It ensures that the Auto Scaling group does not launch or terminate additional EC2 instances before the previous scaling activity takes effect.

A

Its default value is 300 seconds.
It ensures that the Auto Scaling group does not launch or terminate additional EC2 instances before the previous scaling activity takes effect.

120
Q

A media company wants to ensure that the images it delivers through Amazon CloudFront are compatible across various user devices. The company plans to serve images in WebP format to user agents that support it and fall back to JPEG format for those that don’t. Additionally, they want to add a custom header to the response for tracking purposes.

As a solution architect, what approach would you recommend to meet these requirements while minimizing operational overhead?

Implement an image conversion service on EC2 instances and integrate it with CloudFront. Use Lambda functions to modify the response headers and serve the appropriate format based on the User-Agent header.
Generate a CloudFront response headers policy. Utilize the policy to deliver the suitable image format according to the User-Agent HTTP header in the incoming request.
Create multiple CloudFront distributions, each serving a specific image format (WebP or JPEG). Route incoming requests based on the User-Agent header to the respective distribution using Amazon Route 53.
Configure CloudFront behaviors to handle different image formats based on the User-Agent header. Use Lambda@Edge functions to modify the response headers and serve the appropriate format.

A

Configure CloudFront behaviors to handle different image formats based on the User-Agent header. Use Lambda@Edge functions to modify the response headers and serve the appropriate format.

120
Q

A company owns a photo-sharing app that stores user uploads on Amazon S3. There has been an increase in the number of explicit and offensive images being reported. The company currently relies on human efforts to moderate content, and they want to streamline this process by using Artificial Intelligence to only flag images for review. For added security, any communication with your resources on your Amazon VPC must not traverse the public Internet.

How can this task be accomplished with the LEAST amount of effort?

-Use Amazon Monitron to monitor each user upload in S3. Use the AWS Transit Gateway Network Manager to block any outbound requests to the public Internet.
-Use an image classification model in Amazon SageMaker. Set up Amazon GuardDuty and connect it with Amazon SageMaker to ensure that all communications do not traverse the public Internet.
-Use Amazon Detective to detect images with graphic nudity or violence in Amazon S3. Ensure that all communications made by your AWS resources do not traverse the public Internet via the AWS Audit Manager service.
-Use Amazon Rekognition to detect images with graphic nudity or violence in Amazon S3. Create an Interface VPC endpoint for Amazon Rekognition with the necessary policies to prevent any traffic from traversing the public Internet.

A

-Use Amazon Rekognition to detect images with graphic nudity or violence in Amazon S3. Create an Interface VPC endpoint for Amazon Rekognition with the necessary policies to prevent any traffic from traversing the public Internet.

120
Q

A company has developed public APIs hosted in Amazon EC2 instances behind an Elastic Load Balancer. The APIs will be used by various clients from their respective on-premises data centers. A Solutions Architect received a report that the web service clients can only access trusted IP addresses whitelisted on their firewalls.

What should you do to accomplish the above requirement?

Associate an Elastic IP address to a Network Load Balancer.
Create a CloudFront distribution whose origin points to the private IP addresses of your web servers.
Associate an Elastic IP address to an Application Load Balancer.
Create an Alias Record in Route 53 which maps to the DNS name of the load balancer.

A

Associate an Elastic IP address to a Network Load Balancer.

120
Q

A company has two On-Demand EC2 instances inside the Virtual Private Cloud in the same Availability Zone but deployed to different subnets. One EC2 instance is running a database, and the other runs a web application that connects to the database. You need to ensure that these two instances can communicate with each other for the system to work properly.

What are the things you have to check so that these EC2 instances can communicate inside the VPC? (Select TWO.)

-Check if all security groups are set to allow the application host to communicate to the database on the right port and protocol.
-Check the Network ACL if it allows communication between the two subnets.
-Check if both instances are the same instance class.
-Check if the default route is set to a NAT instance or Internet Gateway (IGW) for them to communicate.
-Ensure that the EC2 instances are in the same Placement Group.

A

-Check if all security groups are set to allow the application host to communicate to the database on the right port and protocol.
-Check the Network ACL if it allows communication between the two subnets.
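(The security-group check can be sketched as the ingress permission the database instance's security group would need: allow the application tier's security group on the DB port. The group ID and port are hypothetical examples.)

```python
def db_ingress_rule(app_sg_id, port=3306):
    """Ingress permission letting the app tier's security group reach the DB port."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        # Reference the app security group directly instead of an IP range,
        # so the rule keeps working as instances come and go.
        "UserIdGroupPairs": [{"GroupId": app_sg_id}],
    }
```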

121
Q

A company needs to assess and audit all the configurations in their AWS account. It must enforce strict compliance by tracking all configuration changes made to any of its Amazon S3 buckets. Publicly accessible S3 buckets should also be identified automatically to avoid data breaches.

Which of the following options will meet this requirement?

Use AWS CloudTrail and review the event history of your AWS account.
Use AWS Config to set up a rule in your AWS account.
Use AWS IAM to generate a credential report.
Use AWS Trusted Advisor to analyze your AWS environment.

A

AWS Config

122
Q

For data privacy, a healthcare company has been asked to comply with the Health Insurance Portability and Accountability Act (HIPAA). The company stores all its backups on an Amazon S3 bucket. It is required that data stored on the S3 bucket must be encrypted.

What is the best option to do this? (Select TWO.)

-Enable Server-Side Encryption on an S3 bucket to make use of AES-256 encryption.
-Store the data in encrypted EBS snapshots.
-Store the data on EBS volumes with encryption enabled instead of using Amazon S3.
-Before sending the data to Amazon S3 over HTTPS, encrypt the data locally first using your own encryption keys.
-Enable Server-Side Encryption on an S3 bucket to make use of AES-128 encryption.

A

-Enable Server-Side Encryption on an S3 bucket to make use of AES-256 encryption.

-Before sending the data to Amazon S3 over HTTPS, encrypt the data locally first using your own encryption keys.
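(The first answer can be sketched as the kwargs an `s3.put_object` call would carry to request SSE-S3 encryption; the bucket and key names are hypothetical.)

```python
def sse_s3_put_kwargs(bucket, key, body):
    """Kwargs for s3.put_object requesting SSE-S3 (AES-256) server-side encryption."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "AES256",  # S3-managed keys (SSE-S3)
    }

# Usage with boto3 (not executed here):
#   s3.put_object(**sse_s3_put_kwargs("hipaa-backups", "reports/2024.pdf", data))
```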
