SSA Flashcards
A hospital has a mission-critical application that uses a RESTful API powered by Amazon API Gateway and AWS Lambda. The medical officers upload PDF reports to the system which are then stored as static media content in an Amazon S3 bucket.
The security team wants to improve its visibility when it comes to cyber-attacks and ensure HIPAA (Health Insurance Portability and Accountability Act) compliance. The company is searching for a solution that continuously monitors object-level S3 API operations and identifies protected health information (PHI) in the reports, with minimal changes in their existing Lambda function.
Which of the following solutions will meet these requirements with the LEAST operational overhead?
Use Amazon Textract Medical with PII redaction turned on to extract and filter sensitive text from the PDF reports. Create a new Lambda function that calls the regular Amazon Comprehend API to identify the PHI from the extracted text.
Use Amazon Textract to extract the text from the PDF reports. Integrate Amazon Comprehend Medical with the existing Lambda function to identify the PHI from the extracted text.
Use Amazon Transcribe to read and analyze the PDF reports using the StartTranscriptionJob API operation.
Use Amazon SageMaker Ground Truth to label and detect protected health information (PHI) content with low-confidence predictions.
Use Amazon Rekognition to extract the text data from the PDF reports. Integrate the Amazon Comprehend Medical service with the existing Lambda functions to identify the PHI from the extracted text.
Use Amazon Textract to extract the text from the PDF reports. Integrate Amazon Comprehend Medical with the existing Lambda function to identify the PHI from the extracted text.
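The Lambda-side glue for this answer can be sketched as follows. This is a minimal, hypothetical example assuming a Textract DetectDocumentText-style response: the LINE blocks are joined into plain text that could then be passed to Comprehend Medical's DetectPHI API. The sample response below is hand-made stand-in data, not real service output.

```python
def extract_lines(textract_response):
    """Join the LINE blocks of a Textract-style response into one text blob."""
    return "\n".join(
        block["Text"]
        for block in textract_response.get("Blocks", [])
        if block["BlockType"] == "LINE"
    )

# Hand-made stand-in for a Textract DetectDocumentText response.
sample = {
    "Blocks": [
        {"BlockType": "PAGE"},
        {"BlockType": "LINE", "Text": "Patient: Jane Doe"},
        {"BlockType": "LINE", "Text": "DOB: 01/02/1980"},
        {"BlockType": "WORD", "Text": "Patient:"},  # WORD blocks are skipped
    ]
}

text = extract_lines(sample)
print(text)
# The extracted text would then be sent to Comprehend Medical, e.g.:
#   comprehendmedical.detect_phi(Text=text)
```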
A company has a web-based order processing system that currently uses a standard queue in Amazon SQS. The IT Manager noticed many cases where an order was processed twice. This issue has caused trouble in order processing and made customers very unhappy. The manager has asked you to ensure that this issue will not recur.
What can you do to prevent this from happening again in the future? (Select TWO.)
Change the message size in SQS.
Alter the visibility timeout of SQS.
Alter the retention period in Amazon SQS.
Replace Amazon SQS and use Amazon Simple Workflow Service instead.
Use an Amazon SQS FIFO Queue instead.
Replace Amazon SQS and use Amazon Simple Workflow Service instead.
Use an Amazon SQS FIFO Queue instead.
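The FIFO behavior above can be sketched as the queue attributes you would pass when creating the queue. The queue name is a placeholder; with boto3, this dict would be unpacked into sqs.create_queue(). Content-based deduplication prevents a message with the same body from being delivered twice within the 5-minute deduplication interval.

```python
import json

queue_params = {
    "QueueName": "orders.fifo",  # FIFO queue names must end in .fifo
    "Attributes": {
        "FifoQueue": "true",                   # exactly-once processing
        "ContentBasedDeduplication": "true",   # dedupe by message body hash
    },
}
print(json.dumps(queue_params, indent=2))
# With boto3: sqs.create_queue(**queue_params)
```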
A company launched an EC2 instance in a newly created VPC. They noticed that the instance does not have an associated DNS hostname.
Which of the following options could be a valid reason for this issue?
The newly created VPC has an invalid CIDR block.
Amazon Route 53 is not enabled.
The DNS resolution and DNS hostnames options in the VPC configuration should be enabled.
The security group of the EC2 instance needs to be modified.
The DNS resolution and DNS hostnames options in the VPC configuration should be enabled.
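The two attribute updates behind this answer can be sketched as below. The VPC ID is a placeholder; with boto3, each attribute is enabled with its own ModifyVpcAttribute call, since the API accepts only one attribute per request.

```python
vpc_id = "vpc-0123456789abcdef0"  # placeholder

dns_attributes = {
    "EnableDnsSupport": {"Value": True},    # Amazon-provided DNS resolution
    "EnableDnsHostnames": {"Value": True},  # assign DNS hostnames to instances
}

# With boto3, one call per attribute:
#   ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
#   ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})
for attribute, value in dns_attributes.items():
    print(f"modify_vpc_attribute(VpcId={vpc_id!r}, {attribute}={value})")
```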
To save costs, your manager instructed you to analyze and review the setup of your AWS cloud infrastructure. You should also provide an estimate of how much your company will pay for all of the AWS resources that they are using.
In this scenario, which of the following will incur costs? (Select TWO.)
A running EC2 Instance
A stopped On-Demand EC2 Instance
Public Data Set
Using an Amazon VPC
EBS Volumes attached to stopped EC2 Instances
A running EC2 Instance
EBS Volumes attached to stopped EC2 Instances
A tech company currently has an on-premises infrastructure. They are currently running low on storage and want to have the ability to extend their storage using the AWS cloud.
Which AWS service can help them achieve this requirement?
AWS Storage Gateway
Amazon EC2
Amazon SQS
Amazon Elastic Block Store
AWS Storage Gateway
What is AWS Storage Gateway?
AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration, with data security features, between your on-premises environment and the AWS storage infrastructure.
A company has a set of Linux servers running on multiple On-Demand EC2 Instances. The Audit team wants to collect and process the application log files generated from these servers for their report.
Which of the following services is best to use in this case?
A single On-Demand Amazon EC2 instance for both storing and processing the log files
Amazon S3 Glacier for storing the application log files and Spot EC2 Instances for processing them.
Amazon S3 Glacier Deep Archive for storing the application log files and AWS ParallelCluster for processing the log files.
Amazon S3 for storing the application log files and Amazon Elastic MapReduce for processing the log files.
Amazon S3 for storing the application log files and Amazon Elastic MapReduce for processing the log files.
A company is using an Auto Scaling group which is configured to launch new t2.micro EC2 instances when there is a significant load increase in the application. To cope with the demand, you now need to replace those instances with a larger t2.2xlarge instance type.
How would you implement this change?
Change the instance type of each EC2 instance manually.
Create a new version of the launch template with the new instance type and update the Auto Scaling Group.
Create another Auto Scaling Group and attach the new instance type.
Just change the instance type to t2.2xlarge in the current launch template.
Create a new version of the launch template with the new instance type and update the Auto Scaling Group.
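The answer above maps to two API calls, sketched here as request parameters. The template and group names are placeholders; with boto3, the first dict would go to ec2.create_launch_template_version() and the second to autoscaling.update_auto_scaling_group().

```python
# New launch template version with the larger instance type.
new_version = {
    "LaunchTemplateName": "web-app-template",  # placeholder
    "SourceVersion": "1",                      # copy settings from version 1
    "LaunchTemplateData": {"InstanceType": "t2.2xlarge"},
}

# Point the Auto Scaling group at the latest template version.
asg_update = {
    "AutoScalingGroupName": "web-app-asg",     # placeholder
    "LaunchTemplate": {
        "LaunchTemplateName": "web-app-template",
        "Version": "$Latest",                  # always use the newest version
    },
}
print(new_version["LaunchTemplateData"]["InstanceType"])
```

New instances launched by the group will use the t2.2xlarge type; existing instances can be replaced gradually or via an instance refresh.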
A media company needs to configure an Amazon S3 bucket to serve static assets for the public-facing web application. Which methods ensure that all of the objects uploaded to the S3 bucket can be read publicly all over the Internet? (Select TWO.)
Create an IAM role to set the objects inside the S3 bucket to public read.
Configure the S3 bucket policy to set all objects to public read.
Configure the cross-origin resource sharing (CORS) of the S3 bucket to allow objects to be publicly accessible from all domains.
Do nothing. Amazon S3 objects are already public by default.
Grant public read access to the object when uploading it using the S3 Console.
Configure the S3 bucket policy to set all objects to public read.
Grant public read access to the object when uploading it using the S3 Console.
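The bucket-policy answer can be sketched as the policy document itself. The bucket name is a placeholder; with boto3, the JSON would be passed to s3.put_bucket_policy() (with the bucket's Block Public Access settings disabled).

```python
import json

bucket = "media-static-assets"  # placeholder

public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",                       # anyone on the Internet
            "Action": "s3:GetObject",               # read objects only
            "Resource": f"arn:aws:s3:::{bucket}/*", # every object in the bucket
        }
    ],
}
print(json.dumps(public_read_policy, indent=2))
# With boto3: s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(public_read_policy))
```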
A company has hundreds of VPCs with multiple VPN connections to their data centers spanning 5 AWS Regions. As the number of its workloads grows, the company must be able to scale its networks across multiple accounts and VPCs to keep up. A Solutions Architect is tasked to interconnect all of the company’s on-premises networks, VPNs, and VPCs into a single gateway, which includes support for inter-region peering across multiple AWS regions.
Which of the following is the BEST solution that the architect should set up to support the required interconnectivity?
Set up an AWS Transit Gateway in each region to interconnect all networks within it. Then, route traffic between the transit gateways through a peering connection.
Set up an AWS Direct Connect Gateway to achieve inter-region VPC access to all of the AWS resources and on-premises data centers. Set up a link aggregation group (LAG) to aggregate multiple connections at a single AWS Direct Connect endpoint in order to treat them as a single, managed connection. Launch a virtual private gateway in each VPC and then create a public virtual interface for each AWS Direct Connect connection to the Direct Connect Gateway.
Set up an AWS VPN CloudHub for inter-region VPC access and a Direct Connect gateway for the VPN connections to the on-premises data centers. Create a virtual private gateway in each VPC, then create a private virtual interface for each AWS Direct Connect connection to the Direct Connect gateway.
Enable inter-region VPC peering that allows peering relationships to be established between multiple VPCs across different AWS regions. Set up a networking configuration that ensures that the traffic will always stay on the global AWS backbone and never traverse the public Internet.
Set up an AWS Transit Gateway in each region to interconnect all networks within it. Then, route traffic between the transit gateways through a peering connection.
A leading IT consulting company has an application that processes a large stream of financial data using an Amazon ECS cluster and then stores the results in a DynamoDB table. You have to design a solution that detects new entries in the DynamoDB table and automatically triggers a Lambda function to run tests that verify the processed data.
What solution can be easily implemented to alert the Lambda function of new entries while requiring minimal configuration change to your architecture?
Invoke the Lambda functions using SNS each time that the ECS Cluster successfully processed financial data.
Use Systems Manager Automation to detect new entries in the DynamoDB table then automatically invoke the Lambda function for processing.
Use CloudWatch Alarms to trigger the Lambda function whenever a new entry is created in the DynamoDB table.
Enable DynamoDB Streams to capture table activity and automatically trigger the Lambda function.
Enable DynamoDB Streams to capture table activity and automatically trigger the Lambda function.
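The setup behind this answer can be sketched as two sets of request parameters. The table name, stream ARN, and function name are placeholders; with boto3, the first dict would go to dynamodb.update_table() and the second to lambda's create_event_source_mapping().

```python
# 1. Enable a stream on the existing table so new entries are captured.
stream_spec = {
    "TableName": "ProcessedFinancialData",  # placeholder
    "StreamSpecification": {
        "StreamEnabled": True,
        "StreamViewType": "NEW_IMAGE",      # new item contents for new entries
    },
}

# 2. Wire the stream to the verification Lambda function.
event_source_mapping = {
    "EventSourceArn": "arn:aws:dynamodb:us-east-1:123456789012:"
                      "table/ProcessedFinancialData/stream/LABEL",  # placeholder
    "FunctionName": "verify-processed-data",                        # placeholder
    "StartingPosition": "LATEST",  # only process records created from now on
    "BatchSize": 100,
}
print(stream_spec["StreamSpecification"]["StreamViewType"])
```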
A company is using an On-Demand EC2 instance to host a legacy web application that uses an Amazon Instance Store-Backed AMI. The web application should be decommissioned as soon as possible and hence, you need to terminate the EC2 instance.
When the instance is terminated, what happens to the data on the root volume?
Data is automatically saved as an EBS snapshot.
Data is automatically saved as an EBS volume.
Data is automatically deleted.
Data is unavailable until the instance is restarted.
Data is automatically deleted.
A company conducts performance testing on a t3.large MySQL RDS DB instance twice a week. They use Performance Insights to analyze and fine-tune expensive queries. The company needs to reduce its operational expense in running the tests without compromising the tests’ integrity.
Which of the following is the most cost-effective solution?
Once the testing is completed, take a snapshot of the database and terminate it. Restore the database from the snapshot when necessary.
Stop the database once the test is done and restart it only when necessary.
Perform a mysqldump to get a copy of the database on a local machine. Use MySQL Workbench to analyze the queries.
Downgrade the database instance to t3.small.
Once the testing is completed, take a snapshot of the database and terminate it. Restore the database from the snapshot when necessary.
A popular augmented reality (AR) mobile game is heavily using a RESTful API which is hosted in AWS. The API uses Amazon API Gateway and a DynamoDB table with a preconfigured read and write capacity. Based on your systems monitoring, the DynamoDB table begins to throttle requests during high peak loads which causes the slow performance of the game.
Which of the following can you do to improve the performance of your app?
Create an SQS queue in front of the DynamoDB table.
Integrate an Application Load Balancer with your DynamoDB table.
Add the DynamoDB table to an Auto Scaling Group.
Use DynamoDB Auto Scaling
Use DynamoDB Auto Scaling
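DynamoDB auto scaling is implemented through Application Auto Scaling, and the read-capacity side of this answer can be sketched as below. The table name and capacity limits are placeholders; with boto3, the dicts map to application-autoscaling's register_scalable_target() and put_scaling_policy() calls (write capacity would be registered the same way).

```python
# Register the table's read capacity as a scalable target.
scalable_target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/GameState",  # placeholder table name
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "MinCapacity": 5,
    "MaxCapacity": 500,               # placeholder ceiling for peak loads
}

# Target-tracking policy: scale to keep consumed reads near 70% of provisioned.
scaling_policy = {
    "PolicyName": "GameStateReadScaling",
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/GameState",
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
}
```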
A company decided to change its third-party data analytics tool to a cheaper solution. They sent a full data export as a CSV file that contains all of their analytics information. You then saved the CSV file to an S3 bucket for storage. Your manager asked you to do some validation on the provided data export.
In this scenario, what is the most cost-effective and easiest way to analyze export data using standard SQL?
Create a migration tool to load the CSV export file from S3 to a DynamoDB instance. Once the data has been loaded, run queries using DynamoDB.
Use mysqldump client utility to load the CSV export file from S3 to a MySQL RDS instance. Run some SQL queries once the data has been loaded to complete your validation.
To be able to run SQL queries, use Amazon Athena to analyze the export data file in S3.
Use a migration tool to load the CSV export file from S3 to a database that is designed for online analytical processing (OLAP), such as Amazon Redshift. Run some queries once the data has been loaded to complete your validation.
To be able to run SQL queries, use Amazon Athena to analyze the export data file in S3.
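The Athena workflow behind this answer can be sketched as two SQL strings: a DDL statement that maps an external table onto the CSV in S3, and a validation query. The bucket path, table, and column names are placeholder assumptions; with boto3, each string would be submitted via athena.start_query_execution().

```python
# External table over the CSV export; Athena reads it in place, no loading step.
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS analytics_export (
    event_id   string,
    user_id    string,
    event_time string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://analytics-export-bucket/exports/'
TBLPROPERTIES ('skip.header.line.count' = '1')
"""

# Example validation: count rows with a missing event_id.
validation_query = "SELECT COUNT(*) FROM analytics_export WHERE event_id IS NULL"
print(validation_query)
```

Because Athena is serverless and bills per data scanned, this avoids provisioning any database just to validate a one-off export.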
A company has a global news website hosted in a fleet of EC2 Instances. Lately, the load on the website has increased which resulted in slower response time for the site visitors. This issue impacts the revenue of the company as some readers tend to leave the site if it does not load after 10 seconds.
Which of the below services in AWS can be used to solve this problem? (Select TWO.)
Use Amazon CloudFront with the website as the custom origin.
For better read throughput, use AWS Storage Gateway to distribute the content across multiple regions.
Use Amazon ElastiCache for the website’s in-memory data store or cache.
Deploy the website to all regions in different VPCs for faster processing.
Use Amazon CloudFront with the website as the custom origin.
Use Amazon ElastiCache for the website’s in-memory data store or cache.
A company needs to integrate the Lightweight Directory Access Protocol (LDAP) directory service from the on-premises data center to the AWS VPC using IAM. The identity store which is currently being used is not compatible with SAML.
Which of the following provides the most valid approach to implement the integration?
Develop an on-premises custom identity broker application and use STS to issue short-lived AWS credentials.
Use AWS Single Sign-On (SSO) service to enable single sign-on between AWS and your LDAP.
Use an IAM policy that references the LDAP identifiers and AWS credentials.
Use IAM roles to rotate the IAM credentials whenever LDAP credentials are updated.
Develop an on-premises custom identity broker application and use STS to issue short-lived AWS credentials.
A startup is planning to set up and govern a secure, compliant, multi-account AWS environment in preparation for its upcoming projects. The IT Manager requires the solution to have a dashboard for continuous detection of policy non-conformance and non-compliant resources across the enterprise, as well as to comply with the AWS multi-account strategy best practices.
Which of the following offers the easiest way to fulfill this task?
Use AWS Organizations to build a landing zone to automatically provision new AWS accounts. Utilize the AWS Personal Health Dashboard to see provisioned accounts across your enterprise. Enable preventive and detective guardrails for policy enforcement.
Launch new AWS member accounts using AWS CloudFormation StackSets. Use AWS Config to continuously track the configuration changes and set rules to monitor non-compliant resources. Set up a Multi-Account Multi-Region Data Aggregator to monitor compliance data for rules and accounts in an aggregated view.
Use AWS Service Catalog to launch new AWS member accounts. Configure AWS Service Catalog Launch Constraints to continuously track configuration changes and monitor non-compliant resources. Set up a Multi-Account Multi-Region Data Aggregator to monitor compliance data for rules and accounts in an aggregated view.
Use AWS Control Tower to launch a landing zone to automatically provision and configure new accounts through an Account Factory. Utilize the AWS Control Tower dashboard to monitor provisioned accounts across your enterprise. Set up preventive and detective guardrails for policy enforcement.
Use AWS Control Tower to launch a landing zone to automatically provision and configure new accounts through an Account Factory. Utilize the AWS Control Tower dashboard to monitor provisioned accounts across your enterprise. Set up preventive and detective guardrails for policy enforcement.
An organization plans to use an AWS Direct Connect connection to establish a dedicated connection between its on-premises network and AWS. The organization needs to launch a fully managed solution that will automate and accelerate the replication of data to and from various AWS storage services.
Which of the following solutions would you recommend?
Use an AWS Storage Gateway tape gateway to store data on virtual tape cartridges and asynchronously copy your backups to AWS.
Use an AWS DataSync agent to rapidly move the data over the Internet.
Use an AWS DataSync agent to rapidly move the data over a service endpoint.
Use an AWS Storage Gateway file gateway to store and retrieve files directly using the SMB file system protocol.
Use an AWS DataSync agent to rapidly move the data over a service endpoint.
What is AWS DataSync?
AWS DataSync automates and accelerates the replication of data between your on-premises storage systems and AWS storage services.
A multinational bank is storing its confidential files in an S3 bucket. The security team recently performed an audit, and the report shows that multiple files have been uploaded without 256-bit Advanced Encryption Standard (AES) server-side encryption. For added protection, the encryption key must be automatically rotated every year. The solutions architect must ensure that there would be no other unencrypted files uploaded in the S3 bucket in the future.
Which of the following will meet these requirements with the LEAST operational overhead?
Create an S3 bucket policy that denies permissions to upload an object unless the request includes the "s3:x-amz-server-side-encryption": "AES256" header. Enable server-side encryption with Amazon S3-managed encryption keys (SSE-S3) and rely on the built-in key rotation feature of the SSE-S3 encryption keys.
Create a new customer-managed key (CMK) from the AWS Key Management Service (AWS KMS). Configure the default encryption behavior of the bucket to use the customer-managed key. Manually rotate the CMK each and every year.
Create an S3 bucket policy for the S3 bucket that rejects any object uploads unless the request includes the "s3:x-amz-server-side-encryption": "aws:kms" header. Enable the S3 Object Lock in compliance mode for all objects to automatically rotate the built-in AES256 customer-managed key of the bucket.
Create a Service Control Policy (SCP) for the S3 bucket that rejects any object uploads unless the request includes the "s3:x-amz-server-side-encryption": "AES256" header. Enable server-side encryption with Amazon S3-managed encryption keys (SSE-S3) and modify the built-in key rotation feature of the SSE-S3 encryption keys to rotate the key yearly.
Create an S3 bucket policy that denies permissions to upload an object unless the request includes the "s3:x-amz-server-side-encryption": "AES256" header. Enable server-side encryption with Amazon S3-managed encryption keys (SSE-S3) and rely on the built-in key rotation feature of the SSE-S3 encryption keys.
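The deny-by-default policy in this answer can be sketched as the policy document itself. The bucket name is a placeholder; with boto3, the JSON would be passed to s3.put_bucket_policy(), and any PutObject request missing the SSE-S3 header is rejected with Access Denied.

```python
import json

bucket = "confidential-financial-files"  # placeholder

deny_unencrypted = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                # Deny unless the upload explicitly requests SSE-S3 (AES256).
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            },
        }
    ],
}
print(json.dumps(deny_unencrypted, indent=2))
```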
A company launched a global news website that is deployed to AWS and is using MySQL RDS. The website has millions of viewers from all over the world, which means that the website has a read-heavy database workload. All database transactions must be ACID compliant to ensure data integrity.
In this scenario, which of the following is the best option to use to increase the read-throughput on the MySQL database?
Use SQS to queue up the requests
Enable Multi-AZ deployments
Enable Amazon RDS Standby Replicas
Enable Amazon RDS Read Replicas
Enable Amazon RDS Read Replicas
A food company bought 50 licenses of Windows Server to be used by the developers when launching Amazon EC2 instances to deploy and test applications. The developers are free to provision EC2 instances as long as there is a license available. The licenses are tied to the total CPU count of each virtual machine. The company wants to ensure that developers won’t be able to launch new instances once the licenses are exhausted. The company wants to receive notifications when all licenses are in use.
Which of the following options is the recommended solution to meet the company’s requirements?
Configure AWS Resource Access Manager (AWS RAM) to track and control the licenses used by AWS resources. Configure AWS RAM to provide available licenses for Amazon EC2 instances. Set up an Amazon SNS topic to send notifications and alerts once all licenses are used.
Upload the licenses on AWS Systems Manager Fleet Manager to be encrypted and distributed to Amazon EC2 instances. Attach an IAM role on the EC2 instances to request a license from the Fleet Manager. Set up an Amazon SNS topic to send notifications and alerts once all licenses are used.
Define license configuration rules on AWS Certificate Manager to track and control license usage. Enable the option to “Enforce certificate limit” to prevent going over the number of allocated licenses. Add an Amazon SQS queue with a ChangeMessageVisibility timeout configured to send notifications and alerts.
Define licensing rules on AWS License Manager to track and control license usage. Enable the option to “Enforce license limit” to prevent going over the number of allocated licenses. Add an Amazon SNS topic to send notifications and alerts.
Define licensing rules on AWS License Manager to track and control license usage. Enable the option to “Enforce license limit” to prevent going over the number of allocated licenses. Add an Amazon SNS topic to send notifications and alerts.
A company is looking to store their confidential financial files in AWS, which are accessed every week. The Architect was instructed to set up a storage system that uses envelope encryption and automates key rotation. It should also provide an audit trail that shows who used the encryption key and when, for security purposes.
Which combination of actions should the Architect implement to satisfy the requirement in the most cost-effective way? (Select TWO.)
Configure Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS).
Configure Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3).
Configure Server-Side Encryption with Customer-Provided Keys (SSE-C).
Use Amazon S3 Glacier Deep Archive to store the data.
Use Amazon S3 to store the data.
Configure Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS).
Use Amazon S3 to store the data.
There is a new compliance rule in your company that audits every Windows and Linux EC2 instance each month for any performance issues. They have more than a hundred EC2 instances running in production, and each must have a logging function that collects various system details about that instance. The SysOps team will periodically review these logs and analyze their contents using AWS analytics tools, and the results will need to be retained in an S3 bucket.
In this scenario, what is the most efficient way to collect and analyze logs from the instances with minimal effort?
Install the Amazon Inspector agent in each instance, which will collect and push data to CloudWatch Logs periodically. Set up a CloudWatch dashboard to properly analyze the log data of all instances.
Install the AWS Systems Manager Agent (SSM Agent) in each instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.
Install AWS SDK in each instance and create a custom daemon script that would collect and push data to CloudWatch Logs periodically. Enable CloudWatch detailed monitoring and use CloudWatch Logs Insights to analyze the log data of all instances.
Install the unified CloudWatch Logs agent in each instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.
Install the unified CloudWatch Logs agent in each instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.
A media company is using Amazon EC2, ELB, and S3 for its video-sharing portal for filmmakers. They are using a standard S3 storage class to store all high-quality videos that are frequently accessed only during the first three months of posting.
As a Solutions Architect, what should you do if the company needs to automatically transfer or archive media data from an S3 bucket to Glacier?
Use a custom shell script that transfers data from the S3 bucket to Glacier
Use Amazon SWF
Use Amazon SQS
Use Lifecycle Policies
Use Lifecycle Policies
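The lifecycle policy in this answer can be sketched as a single rule. The prefix is a placeholder, and 90 days approximates the three months of frequent access; with boto3, the dict would be passed to s3.put_bucket_lifecycle_configuration().

```python
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-videos-to-glacier",
            "Filter": {"Prefix": "videos/"},  # placeholder key prefix
            "Status": "Enabled",
            "Transitions": [
                # After ~3 months of frequent access, move to Glacier.
                {"Days": 90, "StorageClass": "GLACIER"}
            ],
        }
    ]
}
# With boto3:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="media-bucket", LifecycleConfiguration=lifecycle_config)
```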
What are AppSync pipeline resolvers?
AppSync pipeline resolvers offer an elegant server-side solution to a common challenge in web applications: aggregating data from multiple database tables. Instead of invoking multiple API calls across different data sources, which can degrade application performance and user experience, pipeline resolvers let you retrieve data from multiple sources with a single call. By chaining pipeline functions, these resolvers streamline the process of consolidating and presenting data to end users.
AWS Run Command
AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances.
A company plans to use Route 53 instead of an ELB to load balance the incoming requests to the web application. The system is deployed to two EC2 instances to which the traffic needs to be distributed. You want to set a specific percentage of traffic to go to each instance.
Which routing policy would you use?
Weighted
Failover
Latency
Geolocation
Weighted
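The weighted routing answer can be sketched as a Route 53 change batch with one weighted record per instance. The record name and IP addresses are placeholders; with boto3, the change batch would go to route53.change_resource_record_sets(). Each record's share of traffic is its weight divided by the sum of all weights, so 50/50 here splits traffic evenly.

```python
def weighted_record(identifier, ip, weight):
    """Build one weighted A record for the change batch."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",       # placeholder record name
            "Type": "A",
            "SetIdentifier": identifier,     # distinguishes the two records
            "Weight": weight,                # share = weight / sum of weights
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

change_batch = {
    "Changes": [
        weighted_record("instance-1", "203.0.113.10", 50),
        weighted_record("instance-2", "203.0.113.20", 50),
    ]
}
```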
An organization plans to run an application in a dedicated physical server that doesn’t use virtualization. The application data will be stored in a storage solution that uses an NFS protocol. To prevent data loss, you need to use a durable cloud storage service to store a copy of your data.
Which of the following is the most suitable solution to meet the requirement?
Use an AWS Storage Gateway hardware appliance for your compute resources. Configure File Gateway to store the application data and create an Amazon S3 bucket to store a backup of your data.
Use an AWS Storage Gateway hardware appliance for your compute resources. Configure Volume Gateway to store the application data and backup data.
Use AWS Storage Gateway with a gateway VM appliance for your compute resources. Configure File Gateway to store the application data and backup data.
Use an AWS Storage Gateway hardware appliance for your compute resources. Configure Volume Gateway to store the application data and create an Amazon S3 bucket to store a backup of your data.
Use an AWS Storage Gateway hardware appliance for your compute resources. Configure File Gateway to store the application data and create an Amazon S3 bucket to store a backup of your data.
A company is running a batch job on an EC2 instance inside a private subnet. The instance gathers input data from an S3 bucket in the same region through a NAT Gateway. The company is looking for a solution that will reduce costs without imposing risks on redundancy or availability.
Which solution will accomplish this?
Re-assign the NAT Gateway to a lower EC2 instance type.
Deploy a Transit Gateway to create a peering connection between the instance and the S3 bucket.
Remove the NAT Gateway and use a Gateway VPC endpoint to access the S3 bucket from the instance.
Replace the NAT Gateway with a NAT instance hosted on a burstable instance type.
Remove the NAT Gateway and use a Gateway VPC endpoint to access the S3 bucket from the instance.
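The endpoint in this answer can be sketched as the creation parameters. The VPC and route table IDs are placeholders; with boto3, the dict would be passed to ec2.create_vpc_endpoint(). A Gateway endpoint adds an S3 route to the private subnet's route table, so the instance reaches S3 without the NAT Gateway's hourly and per-GB data processing charges.

```python
endpoint_params = {
    "VpcId": "vpc-0123456789abcdef0",             # placeholder
    "ServiceName": "com.amazonaws.us-east-1.s3",  # S3 in the same region
    "VpcEndpointType": "Gateway",                 # no hourly charge, unlike NAT
    "RouteTableIds": ["rtb-0123456789abcdef0"],   # private subnet's route table
}
# With boto3: ec2.create_vpc_endpoint(**endpoint_params)
```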
A social media company needs to capture the detailed information of all HTTP requests that went through their public-facing Application Load Balancer every five minutes. The client’s IP address and network latencies must also be tracked. They want to use this data for analyzing traffic patterns and for troubleshooting their Docker applications orchestrated by the Amazon ECS Anywhere service.
Which of the following options meets the customer requirements with the LEAST amount of overhead?
Install and run the AWS X-Ray daemon on the Amazon ECS cluster. Use the Amazon CloudWatch ServiceLens to analyze the traffic that goes through the application.
Enable access logs on the Application Load Balancer. Integrate the Amazon ECS cluster with Amazon CloudWatch Application Insights to analyze traffic patterns and simplify troubleshooting.
Integrate Amazon EventBridge (Amazon CloudWatch Events) metrics on the Application Load Balancer to capture the client IP address. Use Amazon CloudWatch Container Insights to analyze traffic patterns.
Enable AWS CloudTrail for their Application Load Balancer. Use the AWS CloudTrail Lake to analyze and troubleshoot the application traffic.
Enable access logs on the Application Load Balancer. Integrate the Amazon ECS cluster with Amazon CloudWatch Application Insights to analyze traffic patterns and simplify troubleshooting.
An e-commerce company’s Chief Information Security Officer (CISO) has taken necessary measures to ensure that sensitive customer data is secure in the cloud. However, the company recently discovered that some customer Personally Identifiable Information (PII) was mistakenly uploaded to an S3 bucket.
The company aims to rectify this mistake and prevent any similar incidents from happening again in the future. Additionally, the company would like to be notified if this error occurs again.
As the Solutions Architect, which combination of options should you implement in this scenario? (Select TWO.)
Identify sensitive data using Amazon Macie and create an Amazon EventBridge (Amazon CloudWatch Events) rule to capture the SensitiveData event type.
Set up an Amazon SNS topic as the target for an Amazon EventBridge (Amazon CloudWatch Events) rule that sends notifications when the error occurs again.
Identify sensitive data using Amazon GuardDuty by creating an Amazon EventBridge (Amazon CloudWatch Events) rule to include the CRITICAL event types from GuardDuty findings.
Set up an Amazon SQS as the target for an Amazon EventBridge (Amazon CloudWatch Events) rule that sends notifications when the error occurs again.
Set up an AWS IoT Message Broker as the target for an Amazon EventBridge (Amazon CloudWatch Events) rule that sends notifications when the SensitiveData:S3Object/Personal event occurs again.
Identify sensitive data using Amazon Macie and create an Amazon EventBridge (Amazon CloudWatch Events) rule to capture the SensitiveData event type.
Set up an Amazon SNS topic as the target for an Amazon EventBridge (Amazon CloudWatch Events) rule that sends notifications when the error occurs again.
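The Macie-to-SNS wiring in these answers can be sketched as an EventBridge event pattern plus a target. The SNS topic ARN is a placeholder; with boto3, the pattern would go to events.put_rule() and the target to events.put_targets(). The pattern shown matches Macie findings whose type starts with SensitiveData, which covers the PII finding types.

```python
import json

# Match Macie sensitive-data findings delivered to EventBridge.
event_pattern = {
    "source": ["aws.macie"],
    "detail-type": ["Macie Finding"],
    "detail": {"type": [{"prefix": "SensitiveData"}]},
}

# Route matching events to an SNS topic that notifies the security team.
target = {
    "Id": "notify-security-team",
    "Arn": "arn:aws:sns:us-east-1:123456789012:pii-alerts",  # placeholder
}
print(json.dumps(event_pattern))
```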
A company has a web application hosted on a fleet of EC2 instances located in two Availability Zones that are all placed behind an Application Load Balancer. As a Solutions Architect, you have to add a health check configuration to ensure your application is highly available.
Which health checks will you implement?
ICMP health check
FTP health check
HTTP or HTTPS health check
TCP health check
HTTP or HTTPS health check
A Solutions Architect is migrating several Windows-based applications to AWS that require a scalable file system storage for high-performance computing (HPC). The storage service must have full support for the SMB protocol and Windows NTFS, Active Directory (AD) integration, and Distributed File System (DFS).
Which of the following is the MOST suitable storage service that the Architect should use to fulfill this scenario?
Amazon FSx for Lustre
Amazon FSx for Windows File Server
Amazon S3 Glacier Deep Archive
AWS DataSync
Amazon FSx for Windows File Server
A company has a web application hosted in their on-premises infrastructure that they want to migrate to AWS cloud. Your manager has instructed you to ensure that there is no downtime while the migration process is ongoing. To achieve this, your team decided to divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure. Once the migration is complete and the application works with no issues, a full diversion to AWS will be implemented. The company’s VPC is connected to its on-premises network via an AWS Direct Connect connection.
Which of the following are the possible solutions that you can implement to satisfy the above requirement? (Select TWO.)
Use a Network Load Balancer with Weighted Target Groups to divert the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure.
Use Route 53 with Weighted routing policy to divert the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure.
Use an Application Load Balancer with Weighted Target Groups to divert and proportion the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure.
Use Route 53 with Failover routing policy to divert and proportion the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure.
Use AWS Global Accelerator to divert and proportion the HTTP and HTTPS traffic between the on-premises and AWS-hosted application. Ensure that the on-premises network has an AnyCast static IP address and is connected to your VPC via a Direct Connect Gateway.
Use Route 53 with Weighted routing policy to divert the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure.
Use an Application Load Balancer with Weighted Target Groups to divert and proportion the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure.
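The Route 53 weighted routing answer can be sketched as a `ChangeBatch` for the `change_resource_record_sets` API, with two records of equal weight. The domain name and IP addresses are hypothetical:

```python
# Sketch of a Route 53 ChangeBatch splitting traffic 50/50 between the
# AWS-hosted application and the on-premises one. Records sharing a name
# are distinguished by SetIdentifier; traffic is routed in proportion to
# each record's Weight.
def weighted_record(identifier, value, weight):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",       # hypothetical domain
            "Type": "A",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": value}],
        },
    }

change_batch = {
    "Changes": [
        weighted_record("aws", "203.0.113.10", 50),       # AWS endpoint (example IP)
        weighted_record("on-prem", "198.51.100.10", 50),  # on-premises endpoint (example IP)
    ]
}
```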
A leading media company has recently adopted a hybrid cloud architecture which requires them to migrate their application servers and databases to AWS. One of their applications requires a heterogeneous database migration in which you need to transform your on-premises Oracle database to PostgreSQL in AWS. This entails a schema and code transformation before the proper data migration starts.
Which of the following options is the most suitable approach to migrate the database to AWS?
Configure a Launch Template that automatically converts the source schema and code to match that of the target database. Then, use the AWS Database Migration Service to migrate data from the source database to the target database.
First, use the AWS Schema Conversion Tool to convert the source schema and application code to match that of the target database, and then use the AWS Database Migration Service to migrate data from the source database to the target database.
Use Amazon Neptune to convert the source schema and code to match that of the target database in RDS. Use the AWS Batch to effectively migrate the data from the source database to the target database in a batch process.
Heterogeneous database migration is not supported in AWS. You have to transform your database first to PostgreSQL and then migrate it to RDS.
First, use the AWS Schema Conversion Tool to convert the source schema and application code to match that of the target database, and then use the AWS Database Migration Service to migrate data from the source database to the target database.
An application is hosted in an On-Demand EC2 instance and uses the AWS SDK to communicate with other AWS services such as S3, DynamoDB, and many others. As part of the upcoming IT audit, you need to ensure that all API calls to your AWS resources are logged and durably stored.
Which is the most suitable service that you should use to meet this requirement?
AWS X-Ray
Amazon CloudWatch
Amazon API Gateway
AWS CloudTrail
AWS CloudTrail
Records AWS Management Console actions and API calls.
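Durable, audit-ready logging with CloudTrail can be sketched as the arguments for the boto3 `create_trail` call: a multi-region trail delivering logs to S3. The trail and bucket names are hypothetical:

```python
# Sketch of cloudtrail.create_trail() arguments: a multi-region trail
# that durably stores API call logs in an S3 bucket.
create_trail_params = {
    "Name": "audit-trail",                      # hypothetical trail name
    "S3BucketName": "example-cloudtrail-logs",  # hypothetical; bucket must already exist
    "IsMultiRegionTrail": True,                 # capture API calls across all regions
    "EnableLogFileValidation": True,            # detect tampering of delivered log files
}
```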
A financial company wants to store their data in Amazon S3 but at the same time, they want to store their frequently accessed data locally on their on-premises server. They do not have the option to extend their on-premises storage, which is why they are looking for a durable and scalable storage service to use in AWS.
What is the best solution for this scenario?
Use a fleet of EC2 instances with EBS volumes to store the commonly used data.
Use both ElastiCache and S3 for frequently accessed data.
Use Amazon Glacier.
Use the AWS Storage Gateway – Cached Volumes.
Use the AWS Storage Gateway – Cached Volumes.
A company needs to accelerate the performance of its AI-powered medical diagnostic application by running its machine learning workloads on the edge of telecommunication carriers’ 5G networks. The application must be deployed to a Kubernetes cluster and have role-based access control (RBAC) access to IAM users and roles for cluster authentication.
Which of the following should the Solutions Architect implement to ensure single-digit millisecond latency for the application?
Host the application to an Amazon EKS cluster and run the Kubernetes pods on AWS Fargate. Create node groups in AWS Wavelength Zones for the Amazon EKS cluster. Add the EKS pod execution IAM role (AmazonEKSFargatePodExecutionRole) to your cluster and ensure that the Fargate profile has the same IAM role as your Amazon EC2 node groups.
Launch the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create VPC endpoints for the AWS Wavelength Zones and apply them to the Amazon EKS cluster. Install the AWS IAM Authenticator for Kubernetes (aws-iam-authenticator) to your cluster.
Host the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Set up node groups in AWS Wavelength Zones for the Amazon EKS cluster. Attach the Amazon EKS connector agent role (AmazonECSConnectorAgentRole) to your cluster and use AWS Control Tower for RBAC access.
Launch the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create node groups in Wavelength Zones for the Amazon EKS cluster via the AWS Wavelength service. Apply the AWS authenticator configuration map (aws-auth ConfigMap) to your cluster.
Launch the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create node groups in Wavelength Zones for the Amazon EKS cluster via the AWS Wavelength service. Apply the AWS authenticator configuration map (aws-auth ConfigMap) to your cluster.
An e-commerce application is using a fanout messaging pattern for its order management system. For every order, it sends an Amazon SNS message to an SNS topic, and the message is replicated and pushed to multiple Amazon SQS queues for parallel asynchronous processing. A Spot EC2 instance retrieves the message from each SQS queue and processes the message. In one incident, an EC2 instance was abruptly terminated while it was processing a message, and the processing was not completed in time.
In this scenario, what happens to the SQS message?
When the message visibility timeout expires, the message becomes available for processing by other EC2 instances.
The message will be sent to a Dead Letter Queue in AWS DataSync.
The message will automatically be assigned to the same EC2 instance when it comes back online within or after the visibility timeout.
The message is deleted and becomes duplicated in the SQS when the EC2 instance comes online.
When the message visibility timeout expires, the message becomes available for processing by other EC2 instances.
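The visibility timeout behavior in the answer can be illustrated with a small in-memory model (this is a toy simulation, not the real SQS service): a received message is hidden from other consumers until the timeout expires, then becomes visible again if it was never deleted.

```python
import time

class VisibilityQueue:
    """Toy model of the SQS visibility timeout."""

    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}  # message id -> (body, timestamp when it becomes visible again)

    def send(self, msg_id, body):
        self.messages[msg_id] = (body, 0.0)

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for msg_id, (body, invisible_until) in self.messages.items():
            if now >= invisible_until:
                # Hide the message for the visibility timeout window.
                self.messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None  # nothing visible right now

    def delete(self, msg_id):
        # A consumer deletes the message only after successful processing.
        self.messages.pop(msg_id, None)

q = VisibilityQueue(visibility_timeout=30)
q.send("m1", "order-123")
first = q.receive(now=0)            # a consumer picks up the message...
hidden = q.receive(now=10)          # ...so other consumers cannot see it yet
visible_again = q.receive(now=31)   # consumer died; timeout expired, message reappears
```

Since the first consumer was terminated before calling delete, the message simply becomes visible to other instances once the timeout lapses.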
A company has an On-Demand EC2 instance with an attached EBS volume. There is a scheduled job that creates a snapshot of this EBS volume every midnight at 12 AM when the instance is not used. One night, there was a production incident in which you need to perform a change on both the instance and the EBS volume while the snapshot is taking place.
Which of the following statements is true about using an EBS volume while the snapshot is in progress?
The EBS volume can be used in read-only mode while the snapshot is in progress.
The EBS volume cannot be used until the snapshot completes.
The EBS volume can be used while the snapshot is in progress.
The EBS volume cannot be detached or attached to an EC2 instance until the snapshot completes.
The EBS volume can be used while the snapshot is in progress.
A company plans to deploy a Docker-based batch application in AWS. The application will be used to process both mission-critical data as well as non-essential batch jobs.
Which of the following is the most cost-effective option to use in implementing this architecture?
Use ECS as the container management service then set up On-Demand EC2 Instances for processing both mission-critical and non-essential batch jobs.
Use ECS as the container management service then set up Spot EC2 Instances for processing both mission-critical and non-essential batch jobs.
Use ECS as the container management service then set up Reserved EC2 Instances for processing both mission-critical and non-essential batch jobs.
Use ECS as the container management service then set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively.
Use ECS as the container management service then set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively.
An On-Demand EC2 instance is launched into a VPC subnet with the Network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance’s security group has an inbound rule to allow SSH from any IP address and does not have any outbound rules.
In this scenario, what are the changes needed to allow SSH connection to the instance?
The outbound security group needs to be modified to allow outbound traffic.
The network ACL needs to be modified to allow outbound traffic.
No action needed. It can already be accessed from any IP address using SSH.
Both the outbound security group and outbound network ACL need to be modified to allow outbound traffic.
The network ACL needs to be modified to allow outbound traffic.
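The fix works because network ACLs are stateless (unlike security groups), so the return traffic of the inbound SSH session must be explicitly allowed outbound. A sketch of the boto3 `ec2.create_network_acl_entry` arguments, with a placeholder NACL ID:

```python
# Sketch of an outbound NACL entry allowing the SSH session's return
# traffic. Clients connect from ephemeral source ports, so the outbound
# rule covers the ephemeral range rather than port 22.
outbound_rule = {
    "NetworkAclId": "acl-0123456789abcdef0",   # placeholder
    "RuleNumber": 100,
    "Protocol": "6",                           # TCP
    "RuleAction": "allow",
    "Egress": True,                            # this is an OUTBOUND entry
    "CidrBlock": "0.0.0.0/0",
    "PortRange": {"From": 1024, "To": 65535},  # ephemeral return ports
}
```

No security group change is needed: security groups are stateful, so responses to allowed inbound traffic are permitted automatically.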
A Solutions Architect is working for a multinational telecommunications company. The IT Manager wants to consolidate their log streams, including the access, application, and security logs, in one single system. Once consolidated, the company will analyze these logs in real-time based on heuristics. At some point in the future, the company will need to validate the heuristics, which requires going back to data samples extracted from the last 12 hours.
What is the best approach to meet this requirement?
First, configure AWS CloudTrail to receive custom logs and then use EMR to apply heuristics on the logs.
First, send all the log events to Amazon SQS then set up an Auto Scaling group of EC2 servers to consume the logs and finally, apply the heuristics.
First, send all of the log events to Amazon Kinesis then afterwards, develop a client process to apply heuristics on the logs.
First, set up an Auto Scaling group of EC2 servers, then store the logs on Amazon S3, and finally, use EMR to apply heuristics on the logs.
First, send all of the log events to Amazon Kinesis then afterwards, develop a client process to apply heuristics on the logs.
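Sending log events into Kinesis can be sketched as the arguments for the boto3 `kinesis.put_record` call. The stream name and event fields are hypothetical:

```python
import json

# Sketch of kinesis.put_record() arguments: each log event becomes one
# record on a shared stream, keyed so that events from the same source
# are ordered on the same shard.
log_event = {"source": "app-server-1", "level": "INFO", "message": "user login"}

put_record_params = {
    "StreamName": "consolidated-logs",               # hypothetical stream name
    "Data": json.dumps(log_event).encode("utf-8"),   # payload must be bytes
    "PartitionKey": log_event["source"],             # groups events per source
}
```

Kinesis retains records for 24 hours by default (extendable), which covers the requirement to replay the last 12 hours when validating heuristics.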
A data analytics startup collects clickstream data and stores it in an S3 bucket. You need to launch an AWS Lambda function to trigger the ETL jobs to run as soon as new data becomes available in Amazon S3.
Which of the following services can you use as an extract, transform, and load (ETL) service in this scenario?
Redshift Spectrum
AWS Glue
AWS Step Functions
S3 Select
AWS Glue
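A minimal sketch of the Lambda half of this answer: a handler that reads the bucket and key from the S3 "object created" event and builds the arguments for the Glue `start_job_run` API. The Glue job name and argument key are hypothetical, and the real boto3 call is left as a comment so the handler can be exercised locally:

```python
# Sketch of a Lambda handler that reacts to an S3 object-created event
# and starts an AWS Glue ETL job on the new object.
def handler(event, context=None):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    job_args = {
        "JobName": "clickstream-etl",                          # hypothetical Glue job
        "Arguments": {"--input_path": f"s3://{bucket}/{key}"}, # hypothetical job argument
    }
    # boto3.client("glue").start_job_run(**job_args)  # real invocation, omitted here
    return job_args

# Local exercise with a trimmed-down sample S3 event:
sample_event = {
    "Records": [{"s3": {"bucket": {"name": "clickstream-data"},
                        "object": {"key": "2024/01/events.json"}}}]
}
result = handler(sample_event)
```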
A company needs to use Amazon S3 to store irreproducible financial documents. For their quarterly reporting, the files are required to be retrieved after a period of 3 months. There will be some occasions when a surprise audit will be held, which requires access to the archived data that they need to present immediately.
What will you do to satisfy this requirement in a cost-effective way?
Use Amazon S3 Standard
Use Amazon S3 Glacier Deep Archive
Use Amazon S3 Standard – Infrequent Access
Use Amazon S3 Intelligent-Tiering
Use Amazon S3 Standard – Infrequent Access
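Moving the documents into Standard-IA can be sketched as an S3 lifecycle rule for the boto3 `put_bucket_lifecycle_configuration` call. Note that S3 requires objects to stay in Standard for at least 30 days before this transition; the prefix below is hypothetical:

```python
# Sketch of an S3 lifecycle rule that transitions the financial reports
# to Standard-IA: infrequently read, but retrievable immediately (in
# milliseconds) when a surprise audit happens.
lifecycle_config = {
    "Rules": [{
        "ID": "reports-to-standard-ia",
        "Status": "Enabled",
        "Filter": {"Prefix": "financial-reports/"},  # hypothetical prefix
        "Transitions": [{
            "Days": 30,                              # S3 minimum before Standard-IA
            "StorageClass": "STANDARD_IA",
        }],
    }]
}
```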
A company has multiple AWS Site-to-Site VPN connections placed between their VPCs and their remote network. During peak hours, many employees are experiencing slow connectivity issues, which limits their productivity. The company has asked a solutions architect to scale the throughput of the VPN connections.
Which solution should the architect carry out?
Associate the VPCs to an Equal Cost Multipath Routing (ECMR)-enabled transit gateway and attach additional VPN tunnels.
Modify the VPN configuration by increasing the number of tunnels to scale the throughput.
Add more virtual private gateways to a VPC and enable Equal Cost Multipath Routing (ECMR) to get higher VPN bandwidth.
Re-route some of the VPN connections to a secondary customer gateway device on the remote network’s end.
Associate the VPCs to an Equal Cost Multipath Routing (ECMR)-enabled transit gateway and attach additional VPN tunnels.
A company has a web-based ticketing service that utilizes Amazon SQS and a fleet of EC2 instances. The EC2 instances that consume messages from the SQS queue are configured to poll the queue as often as possible to keep end-to-end throughput as high as possible. The Solutions Architect noticed that polling the queue in tight loops is using unnecessary CPU cycles, resulting in increased operational costs due to empty responses.
In this scenario, what should the Solutions Architect do to make the system more cost-effective?
Configure Amazon SQS to use short polling by setting the ReceiveMessageWaitTimeSeconds to zero.
Configure Amazon SQS to use short polling by setting the ReceiveMessageWaitTimeSeconds to a number greater than zero.
Configure Amazon SQS to use long polling by setting the ReceiveMessageWaitTimeSeconds to a number greater than zero.
Configure Amazon SQS to use long polling by setting the ReceiveMessageWaitTimeSeconds to zero.
Configure Amazon SQS to use long polling by setting the ReceiveMessageWaitTimeSeconds to a number greater than zero.
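Long polling can be enabled either on the queue (applies to every consumer) or per request. A sketch of both, as the keyword arguments for the boto3 `set_queue_attributes` and `receive_message` calls; the queue URL is a placeholder:

```python
# Sketch of enabling SQS long polling. A wait time greater than zero
# makes ReceiveMessage wait for messages to arrive instead of returning
# an empty response immediately, eliminating wasteful tight-loop polls.
set_attributes_params = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/111111111111/orders",  # placeholder
    "Attributes": {"ReceiveMessageWaitTimeSeconds": "20"},  # queue-level; 20 is the maximum
}

receive_params = {
    "QueueUrl": set_attributes_params["QueueUrl"],
    "WaitTimeSeconds": 20,        # per-request long poll (overrides queue setting)
    "MaxNumberOfMessages": 10,    # batch up to 10 messages per call
}
```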
A local bank has an in-house application that handles sensitive financial data in a private subnet. After the data is processed by the EC2 worker instances, it is delivered to S3 for ingestion by other services.
How should you design this solution so that the data does not pass through the public Internet?
Create an Internet gateway in the public subnet with a corresponding route entry that directs the data to S3.
Configure a Transit gateway along with a corresponding route entry that directs the data to S3.
Provision a NAT gateway in the private subnet with a corresponding route entry that directs the data to S3.
Configure a VPC Endpoint along with a corresponding route entry that directs the data to S3.
Configure a VPC Endpoint along with a corresponding route entry that directs the data to S3.
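The VPC endpoint answer can be sketched as the arguments for the boto3 `ec2.create_vpc_endpoint` call: S3 supports a gateway endpoint, which installs a route to S3 in the private subnet's route table so traffic never traverses the public Internet. The IDs and region are placeholders:

```python
# Sketch of ec2.create_vpc_endpoint() arguments for an S3 gateway
# endpoint. Associating the private subnet's route table adds the route
# entry that directs S3-bound traffic through the endpoint.
vpc_endpoint_params = {
    "VpcId": "vpc-0123456789abcdef0",              # placeholder
    "ServiceName": "com.amazonaws.us-east-1.s3",   # region is a placeholder
    "VpcEndpointType": "Gateway",                  # S3 supports the gateway type
    "RouteTableIds": ["rtb-0123456789abcdef0"],    # private subnet's route table (placeholder)
}
```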
A business plans to deploy an application on EC2 instances within an Amazon VPC and is considering adopting a Network Load Balancer to distribute incoming traffic among the instances. A solutions architect needs to suggest a solution that will enable the security team to inspect traffic entering and exiting their VPC.
Which approach satisfies the requirements?
Use the Network Access Analyzer service on the application’s VPC for inspecting ingress and egress traffic. Create a new Network Access Scope to filter and analyze all incoming and outgoing requests.
Enable Traffic Mirroring on the Network Load Balancer and forward traffic to the instances. Create a traffic mirror filter to inspect the ingress and egress of data that traverses your Amazon VPC.
Create a firewall using the AWS Network Firewall service at the VPC level then add custom rule groups for inspecting ingress and egress traffic. Update the necessary VPC route tables.
Create a firewall at the subnet level using the Amazon Detective service. Inspect the ingress and egress traffic using the VPC Reachability Analyzer.
Create a firewall using the AWS Network Firewall service at the VPC level then add custom rule groups for inspecting ingress and egress traffic. Update the necessary VPC route tables.