Practice Exam - 3 Flashcards
1. A company wants to run an application on AWS. The company plans to provision its application in Docker containers running in an Amazon ECS cluster. The application requires a MySQL database and the company plans to use Amazon RDS. What is the MOST cost-effective solution to meet these requirements?
- Create an ECS cluster using a fleet of Spot Instances, with Spot Instance draining enabled. Provision the database using Reserved Instances.
- Create an ECS cluster using On-Demand Instances. Provision the database using On-Demand Instances.
- Create an ECS cluster using On-Demand Instances. Provision the database using Spot Instances.
- Create an ECS cluster using a fleet of Spot Instances with Spot Instance draining enabled. Provision the database using On-Demand Instances.
A company has a requirement to store documents that will be accessed by a serverless application. The documents will be accessed frequently for the first 3 months, and rarely after that. The documents must be retained for 7 years. What is the MOST cost-effective solution to meet these requirements?
- Store the documents in Amazon EFS. Create a cron job to move the documents that are older than 3 months to Amazon S3 Glacier. Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years.
- Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier, then expire the documents from Amazon S3 Glacier that are more than 7 years old.
- Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier. Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years.
- Store the documents in an encrypted EBS volume and create a cron job to delete the documents after 7 years.
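The lifecycle option needs no custom jobs at all. A minimal boto3 sketch of that lifecycle policy is shown below, assuming a hypothetical bucket name; the transition and expiration windows are expressed in days (roughly 3 months and 7 years).

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "document-archive-example"  # placeholder bucket name

# Transition objects to S3 Glacier after 90 days (~3 months) and
# expire them after 2555 days (~7 years).
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```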
A global enterprise company is in the process of creating an infrastructure services platform for its users. The company has the following requirements:
· Centrally manage the creation of infrastructure services using a central AWS account.
· Distribute infrastructure services to multiple accounts in AWS Organizations.
· Follow the principle of least privilege to limit end users’ permissions for launching and managing applications.
Which combination of actions using AWS services will meet these requirements? (Select TWO.)
- Define the infrastructure services in AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket and add the IAM users that require access to the S3 bucket policy.
- Allow IAM users to have AWSServiceCatalogEndUserFullAccess permissions. Assign the policy to a group called Endusers, add all users to the group. Apply launch constraints.
- Grant IAM users AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an AWS Organizations SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.
- Allow IAM users to have AWSServiceCatalogEndUserReadOnlyAccess permissions only. Assign the policy to a group called Endusers, add all users to the group. Apply launch constraints.
- Define the infrastructure services in AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the AWS Organizations structure created for the company.
A database for an eCommerce website was deployed on an Amazon RDS for MySQL DB instance with General Purpose SSD storage. The database was running performantly for several weeks until a peak shopping period when customers experienced slow performance and timeouts. Amazon CloudWatch metrics indicate that reads and writes to the DB instance were experiencing long response times. Metrics show that CPU utilization is <50%, plenty of available memory, and sufficient free storage space. There is no evidence of database connectivity issues in the application server logs.
What could be the root cause of database performance issues?
- The increased load resulted in the maximum number of allowed connections to the database instance.
- A large number of reads and writes exhausted the I/O credit balance due to provisioning low disk storage during the setup phase.
- The increased load caused the data in the tables to change frequently, requiring indexes to be rebuilt to optimize queries.
- A large number of reads and writes exhausted the network bandwidth available to the RDS for MySQL DB instances.
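The symptoms (latency with low CPU, ample memory, and free storage on General Purpose SSD) point to exhausted I/O burst credits. A quick boto3 check of the RDS BurstBalance CloudWatch metric, with a placeholder DB instance identifier, might look like this:

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# "shop-db" is a placeholder DB instance identifier.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="BurstBalance",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "shop-db"}],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Minimum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    # Values near 0% indicate the gp2 burst credit balance was exhausted.
    print(point["Timestamp"], point["Minimum"])
```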
A company is using multiple AWS accounts. The company’s DNS records are stored in a private Amazon Route 53 hosted zone in the management account and their applications are running in a production account.
A Solutions Architect is attempting to deploy an application into the production account. The application must resolve a CNAME record set for an Amazon RDS endpoint. The CNAME record set was created in a private hosted zone in the management account.
The deployment failed to start and the Solutions Architect has discovered that the CNAME record is not resolvable on the application EC2 instance despite being correctly created in Route 53.
Which combination of steps should the Solutions Architect take to resolve this issue? (Select TWO.)
- Create a private hosted zone for the record set in the production account. Configure Route 53 replication between AWS accounts.
- Create an authorization to associate the private hosted zone in the management account with the new VPC in the production account.
- Associate a new VPC in the production account with a hosted zone in the management account. Delete the association authorization in the management account.
- Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance’s private IP in the private hosted zone.
- Hardcode the DNS name and IP address of the RDS database instance into the /etc/resolv.conf file on the application server.
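The fix combines a cross-account association authorization (created in the management account) with the VPC association (performed from the production account). A hedged boto3 sketch with placeholder zone and VPC IDs, assuming each client is created with credentials for the appropriate account:

```python
import boto3

ZONE_ID = "Z111111111111"                                  # placeholder hosted zone ID
VPC = {"VPCRegion": "us-east-1", "VPCId": "vpc-0abc0abc0abc0abc0"}  # placeholder VPC

# Step 1 - run with management account credentials (the account owning the zone).
mgmt_route53 = boto3.client("route53")
mgmt_route53.create_vpc_association_authorization(HostedZoneId=ZONE_ID, VPC=VPC)

# Step 2 - run with production account credentials (the account owning the VPC).
prod_route53 = boto3.client("route53")
prod_route53.associate_vpc_with_hosted_zone(HostedZoneId=ZONE_ID, VPC=VPC)

# Optional cleanup in the management account once the association exists.
mgmt_route53.delete_vpc_association_authorization(HostedZoneId=ZONE_ID, VPC=VPC)
```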
6. A new AWS Lambda function has been created to replicate objects that are received in an Amazon S3 bucket to several other S3 buckets in various AWS accounts. The Lambda function is triggered when an object creation event occurs in the main S3 bucket. A Solutions Architect is concerned that the function may impact other critical functions due to Lambda’s regional concurrency limit.
How can the solutions architect ensure the new Lambda function will not impact other critical Lambda functions?
- Ensure the new Lambda function implements an exponential backoff algorithm. Monitor existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric.
- Configure Amazon S3 event notifications to publish events to an Amazon SQS queue in a different account. Create the Lambda function in the same account as the SQS queue and trigger the function when messages are published to the queue.
- Configure the reserved concurrency limit for the new Lambda function. Monitor existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric. (Correct)
- Modify the execution timeout for the Lambda function to the maximum allowable value. Monitor existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric.
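Reserved concurrency caps the new function without shrinking what the critical functions can use beyond that reservation. A minimal boto3 sketch, with placeholder function names, of the concurrency cap and a Throttles alarm on a critical function:

```python
import boto3

lambda_client = boto3.client("lambda")
cloudwatch = boto3.client("cloudwatch")

# Cap the replication function so it cannot exhaust the regional concurrency pool.
lambda_client.put_function_concurrency(
    FunctionName="s3-object-replicator",       # placeholder
    ReservedConcurrentExecutions=50,
)

# Alarm on throttling of an existing critical function.
cloudwatch.put_metric_alarm(
    AlarmName="critical-function-throttles",
    Namespace="AWS/Lambda",
    MetricName="Throttles",
    Dimensions=[{"Name": "FunctionName", "Value": "critical-function"}],  # placeholder
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
)
```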
A company has a mobile application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The application is write intensive and costs have recently increased significantly. The biggest increase in cost has been for the AWS Lambda functions. Application utilization is unpredictable but has been increasing steadily each month.
A Solutions Architect has noticed that the Lambda function execution time averages over 4 minutes. This is due to wait time for a high-latency network call to an on-premises MySQL database. A VPN is used to connect to the VPC.
How can the Solutions Architect reduce the cost of the current architecture?
- Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL.
- Enable API caching on API Gateway to reduce the number of Lambda function invocations.
- Enable Auto Scaling in DynamoDB.
- Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database.
- Enable local caching in the mobile application to reduce the Lambda function invocation calls.
- Offload the frequently accessed records from DynamoDB to Amazon ElastiCache.
- Cache the API Gateway results to Amazon CloudFront.
- Use Amazon EC2 Reserved Instances instead of Lambda.
- Enable Auto Scaling on EC2 and use Spot Instances during peak times.
- Enable DynamoDB Auto Scaling to manage target utilization.
- Enable caching of the Amazon API Gateway results in Amazon CloudFront to reduce the number of Lambda function invocations.
- Enable DynamoDB Accelerator for frequently accessed records and enable the DynamoDB Auto Scaling feature.
A company has deployed an application that uses an Amazon DynamoDB table and the user base has increased significantly. Users have reported poor response times during busy periods but no error pages have been generated. The application uses Amazon DynamoDB in read-only mode. The operations team has determined that the issue relates to ProvisionedThroughputExceeded exceptions in the application logs when doing Scan and read operations.
A Solutions Architect has been tasked with improving application performance. Which solutions will meet these requirements whilst MINIMIZING changes to the application? (Select TWO.)
- Provision a DynamoDB Accelerator (DAX) cluster with the correct number and type of nodes. Tune the item and query cache configuration for an optimal user experience.
- Provision an Amazon ElastiCache for Redis cluster. The cluster should be provisioned with enough shards to handle the peak application load.
- Include error retries and exponential backoffs in the application code to handle throttling errors and reduce load during periods of high requests.
- Enable adaptive capacity for the DynamoDB table to minimize throttling due to throughput exceptions.
- Enable DynamoDB Auto Scaling to manage the throughput capacity as table traffic increases. Set the upper and lower limits to control costs and set a target utilization based on the peak usage.
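DynamoDB Auto Scaling of provisioned throughput is configured through Application Auto Scaling. A sketch under assumed capacity limits and a placeholder table name:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

TABLE = "table/app-table"  # placeholder table name in resource-id form

# Register the table's read capacity as a scalable target with upper and lower limits.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId=TABLE,
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=100,
    MaxCapacity=2000,
)

# Target tracking keeps consumed reads near 70% of provisioned capacity.
autoscaling.put_scaling_policy(
    PolicyName="read-capacity-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId=TABLE,
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```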
A company requires that only the master account in AWS Organizations is able to purchase Amazon EC2 Reserved Instances. Current and future member accounts should be blocked from purchasing Reserved Instances.
Which solution will meet these requirements?
- Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the root of the organization. (Correct)
- Move all current member accounts to a new OU. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the new OU.
- Create an OU for the master account and each member account. Move the accounts into their respective OUs. Apply an SCP to the master account's OU with the Allow effect for the ec2:PurchaseReservedInstancesOffering action.
- Create an Amazon CloudWatch Events rule that triggers a Lambda function to terminate any Reserved Instances launched by member accounts.
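An SCP attached to the organization root applies to all current and future member accounts but never restricts the management (master) account. A minimal boto3 sketch of the deny SCP and its attachment; the policy name is a placeholder:

```python
import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:PurchaseReservedInstancesOffering",
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="DenyReservedInstancePurchases",           # placeholder name
    Description="Block RI purchases outside the management account",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach at the organization root so every current and future member account inherits it.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=root_id)
```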
A company has deployed a SAML 2.0 federated identity solution with their on-premises identity provider (IdP) to authenticate users’ access to the AWS environment. A Solutions Architect ran authentication tests through the federated identity web portal and access to the AWS environment was granted. When a test user attempts to authenticate through the federated identity web portal, they are not able to access the AWS environment.
Which items should the solutions architect check to ensure identity federation is properly configured? (Select THREE.)
- The IAM users' permissions policy has the sts:AssumeRoleWithSAML API action allowed.
- The AWS STS service has the on-premises IdP configured as an event source for authentication requests.
- The IAM users are providing the time-based one-time password (TOTP) codes required for authenticated access.
- The IAM roles created for the federated users or federated groups have a trust policy that sets the SAML provider as the principal.
- The web portal calls the AWS STS AssumeRoleWithSAML API with the ARN of the SAML provider, the ARN of the IAM role, and the SAML assertion from the IdP.
- The company's IdP defines SAML assertions that properly map users or groups in the company to IAM roles with appropriate permissions.
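For reference, the call the web portal makes on behalf of a federated user is AssumeRoleWithSAML; it is made without AWS credentials because the SAML assertion from the IdP is the proof of authentication. The ARNs and assertion below are placeholders:

```python
import boto3

sts = boto3.client("sts")

credentials = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111122223333:role/FederatedDevelopers",     # placeholder
    PrincipalArn="arn:aws:iam::111122223333:saml-provider/CorpIdP",   # placeholder
    SAMLAssertion="<base64-encoded assertion from the IdP>",           # placeholder
)["Credentials"]

# Temporary credentials returned by STS for the federated session.
print(credentials["AccessKeyId"], credentials["Expiration"])
```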
A company is migrating its on-premises systems to AWS. The computers consist of a combination of Windows and Linux virtual machines and physical servers. The company wants to be able to identify dependencies between on-premises systems and group systems together into applications to build migration plans. The company also needs to understand the performance requirements for systems so they can be right-sized.
How can these requirements be met?
- Install the AWS Application Discovery Service Discovery Connector in VMware vCenter. Allow the Discovery Connector to collect data for one week.
- Extract system information from an on-premises configuration management database (CMDB). Import the data directly into the Application Discovery Service.
- Install the AWS Application Discovery Service Discovery Agent on each of the on-premises systems. Allow the Discovery Agent to collect data for a period of time.
- Install the AWS Application Discovery Service Discovery Connector in VMware vCenter. Install the AWS Application Discovery Service Discovery Agent on the physical on-premises servers. Allow the Discovery Agent to collect data for a period of time.
A Solutions Architect is developing a mechanism to gain security approval for Amazon EC2 images (AMIs) so that they can be used by developers. The AMIs must go through an automated assessment process (CVE assessment) and be marked as approved before developers can use them. The approved images must be scanned every 30 days to ensure compliance.
Which combination of steps should the Solutions Architect take to meet these requirements while following best practices? (Select TWO.)
- Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use a managed AWS Config rule for continuous scanning on all EC2 instances and use AWS Systems Manager Automation documents for remediation.
- Use the AWS Systems Manager EC2 agent to run the CVE assessment on the EC2 instances launched from the approved AMIs.
- Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use Amazon EventBridge to trigger an AWS Systems Manager Automation document on all EC2 instances every 30 days.
- Use Amazon Inspector to mount the CVE assessment package on the EC2 instances launched from the approved AMIs.
- Use Amazon GuardDuty to run the CVE assessment package on the EC2 instances launched from the approved AMIs.
A company is designing an application that will require cross-Region disaster recovery with an RTO of less than 5 minutes and an RPO of less than 1 minute. The application tier DR solution has already been designed and a Solutions Architect must design the data recovery solution for the MySQL database tier.
How should the database tier be configured to meet the data recovery requirements?
- Use an Amazon RDS for MySQL instance with a Multi-AZ deployment.
- Create an Amazon RDS instance in the active Region and use a MySQL standby database on an Amazon EC2 instance in the failover Region.
- Use an Amazon Aurora global database with the primary in the active Region and the secondary in the failover Region.
- Use an Amazon RDS for MySQL instance with a cross-Region read replica in the failover Region.
A company runs hundreds of applications across several data centers and office locations. The applications include Windows and Linux operating systems, physical installations as well as virtualized servers, and MySQL and Oracle databases. There is no central configuration management database (CMDB) and existing documentation is incomplete and outdated. A Solutions Architect needs to understand the current environment and estimate the cloud resource costs after the migration.
Which tools or services should the Solutions Architect use to plan the cloud migration? (Select THREE.)
- AWS Cloud Adoption Readiness Tool (CART)
- AWS Migration Hub
- AWS Application Discovery Service
- AWS Config
- Amazon CloudWatch Logs
- AWS Server Migration Service
An eCommerce company is running a promotional campaign and expects a large volume of user sign-ups on a web page that collects user information and preferences. The website runs on Amazon EC2 instances and uses an Amazon RDS for PostgreSQL DB instance. The volume of traffic is expected to be high and may be unpredictable with several spikes in activity. The traffic will result in a large number of database writes.
A solutions architect needs to build a solution that does not change the underlying data model and ensures that submissions are not dropped before they are committed to the database.
Which solution meets these requirements?
- Create an Amazon ElastiCache for Memcached cluster in front of the existing database instance to increase write performance.
- Migrate to Amazon DynamoDB and manage throughput capacity with automatic scaling.
- Create an Amazon SQS queue and decouple the application and database layers. Configure an AWS Lambda function to write items from the queue into the database.
- Use scheduled scaling to scale up the existing DB instance immediately before the event and then automatically scale down afterwards.
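Queuing the sign-ups in SQS means submissions are never dropped while a Lambda consumer drains the queue into PostgreSQL at a rate the database can sustain, and the data model is unchanged. A hedged sketch; the queue URL and the insert_into_postgres helper are hypothetical:

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/signup-queue"  # placeholder


def submit_signup(form_data: dict) -> None:
    """Called by the web tier; the submission is durably queued even if the
    database is briefly overwhelmed."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(form_data))


def lambda_handler(event, context):
    """SQS-triggered consumer; writes each queued sign-up to the existing database."""
    for record in event["Records"]:
        signup = json.loads(record["body"])
        # insert_into_postgres(signup) -- hypothetical helper performing the
        # INSERT against the existing RDS for PostgreSQL instance.
    return {"processed": len(event["Records"])}
```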
A financial services company receives a data feed from a credit card service provider. The feed consists of approximately 2,500 records that are sent every 10 minutes in plaintext and delivered over HTTPS to an encrypted S3 bucket. The data includes credit card data that must be automatically masked before sending the data to another S3 bucket for additional internal processing. There is also a requirement to remove and merge specific fields, and then transform the record into JSON format.
Which solution will meet these requirements?
- Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Trigger another Lambda function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3. Trigger a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for internal processing.
- Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing. (Correct)
- Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match. Perform an Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, send the results to another S3 bucket for internal processing and scale down the EMR cluster.
- Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Configure an AWS Fargate container application to automatically scale to a single instance when the SQS queue contains messages. Have the application process each record and transform the record into JSON format. When the queue is empty, send the results to another S3 bucket for internal processing and scale down the AWS Fargate task.
A solution is required for updating user metadata and will be initiated by a fleet of front-end web servers. The solution must be capable of scaling rapidly from hundreds to tens of thousands of jobs in less than a minute. The solution must be asynchronous and minimize costs.
Which solution should a Solutions Architect use to meet these requirements?
- Create an AWS CloudFormation stack that is updated by an AWS Lambda function. Configure the Lambda function to update the metadata.
- Create an AWS Lambda function that will update user metadata. Create AWS Step Functions that will trigger the Lambda function. Update the web application to initiate Step Functions for every job.
- Create an Amazon EC2 Auto Scaling group of EC2 instances that pull messages from an Amazon SQS queue and process the user metadata updates. Configure the web application to send jobs to the queue.
- Create an AWS Lambda function that will update user metadata. Create an Amazon SQS queue and configure it as an event source for the Lambda function. Update the web application to send jobs to the queue.
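With SQS configured as a Lambda event source, the queue absorbs the jump from hundreds to tens of thousands of jobs while Lambda scales consumers automatically, and costs accrue only per invocation. A minimal sketch of the event source mapping, with placeholder names:

```python
import boto3

lambda_client = boto3.client("lambda")

# Wire the queue to the function; Lambda polls SQS and scales concurrent
# consumers automatically as the backlog grows.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:metadata-updates",  # placeholder
    FunctionName="update-user-metadata",                                    # placeholder
    BatchSize=10,
)
```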
A company uses AWS Organizations. The company recently acquired a new business unit and invited the new unit’s existing account to the company’s organization. The organization uses a deny list SCP in the root of the organization and all accounts are members of a single OU named Production.
The administrators of the new business unit discovered that they are unable to access AWS Database Migration Service (DMS) to complete an in-progress migration.
Which option will temporarily allow administrators to access AWS DMS and complete the migration project?
- Create a temporary OU named Staging for the new account. Apply an SCP to the Staging OU to allow AWS DMS actions. Move the organization's deny list SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS DMS are complete.
- Convert the organization's root SCPs from deny list SCPs to allow list SCPs to allow the required services only. Temporarily apply an SCP to the organization root that allows AWS DMS actions for principals only in the new account.
- Create a temporary OU named Staging for the new account. Apply an SCP to the Staging OU to allow AWS DMS actions. Move the new account to the Production OU when the migration project is complete.
- Remove the organization’s root SCPs that limit access to AWS DMS. Create an SCP that allows AWS DMS actions and apply the SCP to the Production OU.
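A partial boto3 sketch of the Staging OU approach (create the OU, attach an SCP that allows DMS, and move the new account into it); the account ID and Production OU ID are placeholders:

```python
import json
import boto3

org = boto3.client("organizations")

root_id = org.list_roots()["Roots"][0]["Id"]

# Create a temporary OU that is not covered by the Production restrictions.
staging_ou = org.create_organizational_unit(ParentId=root_id, Name="Staging")
staging_ou_id = staging_ou["OrganizationalUnit"]["Id"]

# SCP for the Staging OU permitting DMS actions.
allow_dms = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "dms:*", "Resource": "*"}],
}
policy = org.create_policy(
    Name="AllowDMS",                                             # placeholder name
    Description="Temporary access to AWS DMS for the acquired account",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(allow_dms),
)
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=staging_ou_id)

# Move the new account out of Production into Staging for the migration.
org.move_account(
    AccountId="444455556666",           # placeholder account ID
    SourceParentId="ou-prod-example",   # placeholder Production OU ID
    DestinationParentId=staging_ou_id,
)
```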
A company is testing an application that collects data from sensors fitted to vehicles. The application collects usage statistics data every 4 minutes. The data is sent to Amazon API Gateway, it is then processed by an AWS Lambda function and the results are stored in an Amazon DynamoDB table.
As the sensors have been fitted to more vehicles, and as more metrics have been configured for collection, the Lambda function execution time has increased from a few seconds to over 2 minutes. There are also many TooManyRequestsException errors being generated by Lambda.
Which combination of changes will resolve these issues? (Select TWO.)
- Collect data in an Amazon SQS FIFO queue, which triggers a Lambda function to process each message.
- Stream the data into an Amazon Kinesis data stream from API Gateway and process the data in batches.
- Increase the CPU units assigned to the Lambda functions.
- Use Amazon EC2 instead of Lambda to process the data.
- Increase the memory available to the Lambda functions.
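Streaming the sensor data into Kinesis lets Lambda consume records in batches rather than one invocation per API call, which cuts the concurrent executions behind the TooManyRequestsException errors. A hedged sketch of the event source mapping and a batch consumer; the stream, function, and helper names are placeholders:

```python
import base64
import json
import boto3

lambda_client = boto3.client("lambda")

# Deliver stream records to the function in batches rather than per request.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:111122223333:stream/vehicle-metrics",  # placeholder
    FunctionName="process-vehicle-metrics",                                           # placeholder
    StartingPosition="LATEST",
    BatchSize=200,
)


def lambda_handler(event, context):
    """Kinesis-triggered handler; each invocation processes a batch of records."""
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # write_metrics_to_dynamodb(payload) -- hypothetical helper writing to
        # the existing DynamoDB table.
    return {"batchSize": len(event["Records"])}
```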
A Solutions Architect is designing a web application that will serve static content in an Amazon S3 bucket and dynamic content hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The application will use Amazon CloudFront and the solution should require that the content is available through CloudFront only.
Which combination of steps should the Solutions Architect take to restrict direct content access to CloudFront? (Select THREE.)
- Create a CloudFront Origin Access Identity (OAI) and add it to the CloudFront distribution. Update the S3 bucket policy to allow access to the OAI only. (Correct)
- Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the CloudFront distribution.
- Configure CloudFront to add a custom header to requests that it sends to the origin.
- Configure the ALB to add a custom header to HTTP requests that are sent to the EC2 instances.
- Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the ALB.
- Configure an S3 bucket policy to allow access from the CloudFront IP addresses only.
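For the S3 origin, the bucket policy grants read access only to the CloudFront OAI principal. A minimal boto3 sketch with a placeholder OAI ID and bucket name:

```python
import json
import boto3

s3 = boto3.client("s3")

OAI_ID = "E1EXAMPLE123456"          # placeholder OAI ID from the distribution
BUCKET = "static-content-example"   # placeholder bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

# Only requests signed by the OAI (i.e. coming through CloudFront) can read objects.
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```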
A company runs a data processing application on-premises and plans to move it to the AWS Cloud. Files are uploaded by users to a web application which then stores the files on an NFS-based storage system and places a message on a queue. The files are then processed from the queue and the results are returned to the user (and stored in long-term storage). This process can take up to 30 minutes. The processing times vary significantly and can be much higher during business hours.
What is the MOST cost-effective migration recommendation?
- Create a queue using Amazon SQS. Run the web application on Amazon EC2 and configure it to publish to the new queue. Use an AWS Lambda function to poll the queue, pull requests, and process the files. Store the processed files in an Amazon S3 bucket.
- Create a queue using Amazon MQ. Run the web application on Amazon EC2 and configure it to publish to the new queue. Use an AWS Lambda function to poll the queue, pull requests, and process the files. Store the processed files in Amazon EFS.
- Create a queue using Amazon SQS. Run the web application on Amazon EC2 and configure it to publish to the new queue. Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket. (Correct)
- Create a queue using Amazon MQ. Run the web application on Amazon EC2 and configure it to publish to the new queue. Launch an Amazon EC2 instance from a preconfigured AMI to poll the queue, pull requests, and process the files. Store the processed files in Amazon EFS. Terminate the EC2 instance after the task is complete.
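Scaling the worker fleet on queue depth can be done with a CloudWatch alarm on the queue's ApproximateNumberOfMessagesVisible metric driving a scaling policy. A hedged sketch with placeholder queue and Auto Scaling group names (production setups often track backlog per instance instead):

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Add two workers whenever the backlog grows beyond the threshold.
scale_out = autoscaling.put_scaling_policy(
    AutoScalingGroupName="file-processing-workers",   # placeholder
    PolicyName="scale-out-on-queue-depth",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=120,
)

cloudwatch.put_metric_alarm(
    AlarmName="file-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "file-processing-queue"}],  # placeholder
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[scale_out["PolicyARN"]],
)
```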
A new application that provides fitness and training advice has become extremely popular with thousands of new users from around the world. The web application is hosted on a fleet of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The content consists of static media files and different resources must be loaded depending on the client operating system.
Users have reported increasing latency for loading web pages and Amazon CloudWatch is showing high utilization of the EC2 instances.
Which set of actions should a solutions architect take to improve response times?
- Create a separate ALB for each client operating system. Create one Auto Scaling group behind each ALB. Use Amazon Route 53 to route to different ALBs depending on the User-Agent HTTP header.
- Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use Lambda@Edge to load different resources based on the User-Agent HTTP header.
- Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use the User-Agent HTTP header to load different content.
- Create separate Auto Scaling groups based on client operating systems. Switch to a Network Load Balancer (NLB). Use the User-Agent HTTP header in the NLB to route to a different set of EC2 instances.
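Moving the static media behind CloudFront offloads the EC2 fleet, and a Lambda@Edge viewer-request function can pick OS-specific resources. A minimal Python handler sketch; the path prefixes are illustrative:

```python
def lambda_handler(event, context):
    """Lambda@Edge viewer-request handler: rewrite the URI by User-Agent so
    CloudFront serves OS-specific objects from the S3 origin."""
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    user_agent = headers.get("user-agent", [{"value": ""}])[0]["value"]

    if "Android" in user_agent:
        request["uri"] = "/android" + request["uri"]       # illustrative prefix
    elif "iPhone" in user_agent or "iPad" in user_agent:
        request["uri"] = "/ios" + request["uri"]            # illustrative prefix

    return request
```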
A company includes several business units that each use a separate AWS account and a parent company AWS account. The company requires a single AWS bill across all AWS accounts with costs broken out for each business unit. The company also requires that services and features be restricted in the business unit accounts and this must be governed centrally.
Which combination of steps should a Solutions Architect take to meet these requirements? (Select TWO.)
- Use permissions boundaries applied to each business unit’s AWS account to define the maximum permissions available for services and features.
- Use AWS Organizations to create a single organization in the parent account with all features enabled. Then, invite each business unit’s AWS account to join the organization.
- Use AWS Organizations to create a separate organization for each AWS account with all features enabled. Then, create trust relationships between the AWS organizations.
- Enable consolidated billing in the parent account's billing console and link the business unit AWS accounts.
- Create an SCP that allows only approved services and features, then apply the policy to the business unit AWS accounts.
A company is migrating an order processing application to the AWS Cloud. The usage patterns vary significantly but the application must be available at all times. Orders must be processed immediately and in the order that they are received. Which actions should a Solutions Architect take to meet these requirements?
- Use Amazon SQS with FIFO to queue messages in the correct order. Use Spot Instances in multiple Availability Zones for processing.
- Use Amazon SNS with FIFO to send orders in the correct order. Use Spot Instances in multiple Availability Zones for processing.
- Use Amazon SQS with FIFO to queue messages in the correct order. Use Reserved Instances in multiple Availability Zones for processing.
- Use Amazon SNS with FIFO to send orders in the correct order. Use a single large Reserved Instance for processing.
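An SQS FIFO queue preserves arrival order within a message group, and Reserved Instances across Availability Zones keep processing available at all times. A minimal boto3 sketch of the queue and an ordered send; the queue and group names are placeholders:

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo"; ordering is preserved per message group.
queue = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)

sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"orderId": "1001"}',
    MessageGroupId="orders",   # messages in one group are delivered in order
)
```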
An application consists of three tiers within a single Region. A Solutions Architect is designing a disaster recovery strategy that includes an RTO of 30 minutes and an RPO of 5 minutes for the data tier. Application tiers use Amazon EC2 instances and are stateless. The data tier consists of a 30TB Amazon Aurora database.
Which combination of steps satisfies the RTO and RPO requirements while optimizing costs? (Select TWO.)
- Create a cross-Region Aurora Replica of the database
- Deploy a hot standby of the application tiers to another Region
- Use AWS DMS to replicate the Aurora DB to an RDS database in another Region.
- Create snapshots of the Aurora database every 5 minutes.
- Create daily snapshots of the EC2 instances and replicate them to another Region.
A company is running a custom Java application on-premises and plans to migrate the application to the AWS Cloud. The application uses a MySQL database and the application servers maintain users’ sessions locally. Which combination of architecture changes will be required to create a highly available solution on AWS? (Select THREE.)
- Put the application instances in an Amazon EC2 Auto Scaling group. Configure the Auto Scaling group to create new instances if an instance becomes unhealthy.
- Move the Java content to an Amazon S3 bucket configured for static website hosting. Configure cross-Region replication for the S3 bucket contents.
- Migrate the database to Amazon RDS for MySQL. Configure the RDS instance to use a Multi-AZ deployment.
- Configure the application to store the user’s session in Amazon ElastiCache. Use Application Load Balancers to distribute the load between application instances.
- Configure the application to run in multiple Regions. Use an Application Load Balancer to distribute the load between application instances.
- Migrate the database to Amazon EC2 instances in multiple Availability Zones. Configure Multi-AZ to synchronize the changes.
A company has an NFS file server on-premises with 50 TB of data that is being migrated to Amazon S3. The data is made up of many millions of small files and a Snowball Edge device is being used for the migration. A shell script is being used to copy data using the file interface of the Snowball Edge device. Data transfer times are very slow and the Solutions Architect suspects this may be related to the overhead of encrypting all the small files and copying them over the network.
What change should be made to improve data transfer times?
- Modify the shell script to ensure that individual files are being copied rather than directories.
- Connect directly to the USB interface on the Snowball Edge device and copy the files locally.
- Cluster two Snowball Edge devices together to increase the throughput of the devices.
- Perform multiple copy operations at one time by running each command from a separate terminal window, in separate instances of the Snowball client. (Correct)
A Solutions Architect needs to design the architecture for an application that requires high availability within and across AWS Regions. The design must support failover to the second Region within 1 minute and must minimize the impact on the user experience. The application will include three tiers, the web tier, application tier and NoSQL data tier.
Which combination of steps will meet these requirements? (Select THREE.)
- Use Amazon DynamoDB with a global table across both Regions so reads and writes can occur in either location.
- Run the web and application tiers in both Regions in an active/active configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources.
- Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL) to 30 seconds.
- Use an Amazon Aurora global database across both Regions so reads and writes can occur in either location.
- Run the web and application tiers in both Regions in an active/active configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources.
- Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 30 minutes.
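Failover routing with a low TTL supports the 1-minute cross-Region failover requirement. A hedged boto3 sketch of the primary/secondary record pair; the zone ID, record name, health check ID, and DNS targets are placeholders:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z222222222222",   # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "primary-region",
                    "Failover": "PRIMARY",
                    "TTL": 30,  # short TTL so clients re-resolve quickly after failover
                    "HealthCheckId": "hc-primary-example",  # placeholder health check ID
                    "ResourceRecords": [{"Value": "primary-alb.us-east-1.elb.amazonaws.com"}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "secondary-region",
                    "Failover": "SECONDARY",
                    "TTL": 30,
                    "ResourceRecords": [{"Value": "standby-alb.eu-west-1.elb.amazonaws.com"}],
                },
            },
        ]
    },
)
```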
A company is using AWS CloudFormation templates for infrastructure provisioning. The templates are hosted in the company’s private GitHub repository. The company has experienced several issues with updates to the templates that have caused errors when executing the updates and creating the environment. A Solutions Architect must resolve these issues and implement automated testing of the CloudFormation template updates.
How can the Solutions Architect accomplish these requirements?
- Use AWS Lambda to synchronize the contents of the GitHub repository to AWS CodeCommit. Use AWS CodeDeploy to create and execute a change set. Configure CodeDeploy to test the environment using testing scripts run by AWS CodeBuild.
- Use AWS CodePipeline to create and execute a change set when updates are made to the CloudFormation templates in GitHub. Include a CodePipeline action to test the deployment with testing scripts run using AWS CodeDeploy. Upon successful testing, configure CodePipeline to execute the change set and deploy to production.
- Use AWS Lambda to synchronize the contents of the GitHub repository to AWS CodeCommit. Use AWS CodeBuild to create and execute a change set from the templates in GitHub. Configure CodeBuild to test the deployment with testing scripts.
- Use AWS CodePipeline to create a change set when updates are made to the CloudFormation templates in GitHub. Include a CodePipeline action to test the deployment with testing scripts run using AWS CodeBuild. Upon successful testing, configure CodePipeline to execute the change set and deploy to production.
- Use AWS Lambda to synchronize the contents of the GitHub repository to AWS CodeCommit. Use AWS CodeDeploy to create and execute a change set. Configure CodeDeploy to test the environment using testing scripts run by AWS CodeBuild.
- Use AWS CodePipeline to create and execute a change set when updates are made to the CloudFormation templates in GitHub. Include a CodePipeline action to test the deployment with testing scripts run using AWS CodeDeploy. Upon successful testing, configure CodePipeline to execute the change set and deploy to production.
- Use AWS Lambda to synchronize the contents of the GitHub repository to AWS CodeCommit. Use AWS CodeBuild to create and execute a change set from the templates in GitHub. Configure CodeBuild to test the deployment with testing scripts.
- Use AWS CodePipeline to create a change set when updates are made to the CloudFormation templates in GitHub. Include a CodePipeline action to test the deployment with testing scripts run using AWS CodeBuild. Upon successful testing, configure CodePipeline to execute the change set and deploy to production.
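For reference, the change-set flow that such a pipeline automates can be sketched with boto3; the stack, change set, and template names below are placeholders:

```python
import boto3

cfn = boto3.client("cloudformation")

# Create a change set from the updated template (names are placeholders)
cfn.create_change_set(
    StackName="app-stack",
    ChangeSetName="template-update",
    TemplateURL="https://s3.amazonaws.com/example-bucket/template.yaml",
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Wait for the change set, review the proposed changes, then execute it
cfn.get_waiter("change_set_create_complete").wait(
    StackName="app-stack", ChangeSetName="template-update"
)
print(cfn.describe_change_set(StackName="app-stack", ChangeSetName="template-update")["Changes"])
cfn.execute_change_set(StackName="app-stack", ChangeSetName="template-update")
```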
A Solution Architect used the AWS Application Discovery Service to gather information about some on-premises database servers. The tool discovered an Oracle data warehouse and several MySQL databases. The company plans to migrate to AWS and the Solutions Architect must determine the best migration pattern for each database.
Which combination of migration patterns will reduce licensing costs and operational overhead? (Select TWO.)
- Migrate the Oracle data warehouse to an Amazon ElastiCache for Redis cluster using AWS DMS.
- Migrate the MySQL databases to Amazon RDS for MySQL using AWS DMS.
- Migrate the Oracle data warehouse to Amazon Redshift using AWS SCT and AWS DMS.
- Lift and shift the Oracle data warehouse to Amazon EC2 using AWS Snowball.
- Lift and shift the MySQL databases to Amazon EC2 using AWS Snowball.
- Migrate the Oracle data warehouse to an Amazon ElastiCache for Redis cluster using AWS DMS.
- Migrate the MySQL databases to Amazon RDS for MySQL using AWS DMS.
- Migrate the Oracle data warehouse to Amazon Redshift using AWS SCT and AWS DMS.
- Lift and shift the Oracle data warehouse to Amazon EC2 using AWS Snowball.
- Lift and shift the MySQL databases to Amazon EC2 using AWS Snowball.
A developer is attempting to access an Amazon S3 bucket in a member account in AWS Organizations. The developer is logged in to the account with user credentials and has received an access denied error with no bucket listed. The developer should have read-only access to all buckets in the account.
A Solutions Architect has reviewed the permissions and found that the developer’s IAM user has been granted read-only access to all S3 buckets in the account.
Which additional steps should the Solutions Architect take to troubleshoot the issue? (Select TWO.)
- Check the ACLs for all S3 buckets.
- Check the bucket policies for all S3 buckets.
- Check for the permissions boundaries set for the IAM user.
- Check if an appropriate IAM role is attached to the IAM user.
- Check the SCPs set at the organizational units (OUs).
- Check the ACLs for all S3 buckets.
- Check the bucket policies for all S3 buckets.
- Check for the permissions boundaries set for the IAM user.
- Check if an appropriate IAM role is attached to the IAM user.
- Check the SCPs set at the organizational units (OUs).
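As a starting point for the investigation, a short boto3 sketch can pull the bucket policy and the SCPs attached to the member account; the bucket name and account ID are placeholders:

```python
import boto3

s3 = boto3.client("s3")
org = boto3.client("organizations")  # run from the Organizations management account

# Look for explicit denies in the bucket policy (bucket name is a placeholder)
print(s3.get_bucket_policy(Bucket="example-bucket")["Policy"])

# List the SCPs attached to the member account (account ID is a placeholder)
scps = org.list_policies_for_target(
    TargetId="111122223333",
    Filter="SERVICE_CONTROL_POLICY",
)
for policy in scps["Policies"]:
    print(policy["Name"], policy["Id"])
```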
A company is moving their IT infrastructure to the AWS Cloud and will have several Amazon VPCs across multiple Regions. The company requires centralized and controlled egress-only internet access. The solution must be highly available and horizontally scalable. The company is expecting to grow the number of VPCs to more than fifty.
A Solutions Architect is designing the network for the new cloud deployment. Which design pattern will meet the stated requirements?
- Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and attach it to the transit gateway.
- Attach each VPC to a centralized transit VPC with a VPN connection to each standalone VPC. Outbound internet traffic will be controlled by firewall appliances.
- Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and connect the transit gateway using IPSec VPNs with BGP.
- Attach each VPC to a shared centralized VPC. Configure VPC peering between each VPC and the centralized VPC. Configure a NAT gateway in two AZs within the centralized VPC.
- Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and attach it to the transit gateway.
- Attach each VPC to a centralized transit VPC with a VPN connection to each standalone VPC. Outbound internet traffic will be controlled by firewall appliances.
- Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and connect the transit gateway using IPSec VPNs with BGP.
- Attach each VPC to a shared centralized VPC. Configure VPC peering between each VPC and the centralized VPC. Configure a NAT gateway in two AZs within the centralized VPC.
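A rough boto3 sketch of the hub-and-spoke attachment pattern, assuming placeholder VPC and subnet IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Shared transit gateway acting as the central hub
tgw_id = ec2.create_transit_gateway(Description="central-egress-hub")["TransitGateway"]["TransitGatewayId"]

# Attach each spoke VPC and the egress VPC (IDs are placeholders)
attachments = [
    ("vpc-spoke-1", ["subnet-spoke-1a"]),
    ("vpc-egress", ["subnet-egress-1a", "subnet-egress-1b"]),
]
for vpc_id, subnet_ids in attachments:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )
```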
A company provides a service that allows users to upload high-resolution product images using an app on their phones for a price matching service. The service currently uses Amazon S3 in the us-west-1 Region. The company has expanded to Europe and users in European countries are experiencing significant delays when uploading images.
Which combination of changes can a Solutions Architect make to improve the upload times for the images? (Select TWO.)
- Redeploy the application to use Amazon S3 multipart upload.
- Create an Amazon CloudFront distribution with the S3 bucket as an origin.
- Modify the Amazon S3 bucket to use Intelligent Tiering.
- Configure the client application to use byte-range fetches.
- Configure the S3 bucket to use S3 Transfer Acceleration.
- Redeploy the application to use Amazon S3 multipart upload.
- Create an Amazon CloudFront distribution with the S3 bucket as an origin.
- Modify the Amazon S3 bucket to use Intelligent Tiering.
- Configure the client application to use byte-range fetches.
- Configure the S3 bucket to use S3 Transfer Acceleration.
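Transfer Acceleration is enabled per bucket and then used through the accelerate endpoint; a minimal boto3 sketch with a placeholder bucket name:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the upload bucket (bucket name is a placeholder)
s3.put_bucket_accelerate_configuration(
    Bucket="example-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload through the accelerate endpoint via the nearest edge location
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("photo.jpg", "example-uploads", "photos/photo.jpg")
```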
A company plans to build a gaming application in the AWS Cloud that will be used by Internet-based users. The application will run on a single instance and connections from users will be made over the UDP protocol. The company has requested that the service be implemented with a high level of security. A Solutions Architect has been asked to design a solution for the application on AWS.
Which combination of steps should the Solutions Architect take to meet these requirements? (Select THREE.)
- Use an Application Load Balancer (ALB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the ALB's internet-facing fully qualified domain name (FQDN).
- Enable AWS Shield Advanced on all public-facing resources.
- Use a Network Load Balancer (NLB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the NLB's Elastic IP address.
- Define an AWS WAF rule to explicitly drop non-UDP traffic and associate the rule with the load balancer.
- Configure a network ACL rule to block all non-UDP traffic. Associate the network ACL with the subnets that hold the load balancer instances.
- Use AWS Global Accelerator with an Elastic Load Balancer as an endpoint.
- Use an Application Load Balancer (ALB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the ALB's internet-facing fully qualified domain name (FQDN).
- Enable AWS Shield Advanced on all public-facing resources.
- Use a Network Load Balancer (NLB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the NLB's Elastic IP address.
- Define an AWS WAF rule to explicitly drop non-UDP traffic and associate the rule with the load balancer.
- Configure a network ACL rule to block all non-UDP traffic. Associate the network ACL with the subnets that hold the load balancer instances.
- Use AWS Global Accelerator with an Elastic Load Balancer as an endpoint.
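For context, UDP traffic is supported by Network Load Balancers rather than ALBs; a minimal boto3 sketch of a UDP target group and listener, with placeholder IDs and ARNs:

```python
import boto3

elbv2 = boto3.client("elbv2")

# UDP target group for the game server (VPC ID is a placeholder)
tg = elbv2.create_target_group(
    Name="game-udp-tg",
    Protocol="UDP",
    Port=4000,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)["TargetGroups"][0]

# UDP listener on the NLB (load balancer ARN is a placeholder)
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/game-nlb/0123456789abcdef",
    Protocol="UDP",
    Port=4000,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```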
A company has a large photo library stored on Amazon S3. They use AWS Lambda to extract metadata from the files according to various processing rules for different categories of photo. The output is then stored in an Amazon DynamoDB table.
The extraction process is performed whenever customer requests are submitted and can take up to 60 minutes to complete. The company wants to reduce the time taken to extract the metadata and has split the single Lambda function into separate Lambda functions for each category of photo.
Which additional steps should the Solutions Architect take to meet the requirements?
- Create an AWS Batch compute environment for each Lambda function. Configure an AWS Batch job queue for the compute environment. Create a Lambda function to retrieve a list of files and write each item to the job queue.
- Create a Lambda function to retrieve a list of files and write each item to an Amazon SQS queue. Subscribe the metadata extraction Lambda functions to the SQS queue with a large batch size.
- Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create another Step Functions workflow that retrieves a list of files and executes a metadata extraction workflow for each one.
- Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create a Lambda function to retrieve a list of files and write each item to an Amazon SQS queue. Configure the SQS queue as an input to the Step Functions workflow.
- Create an AWS Batch compute environment for each Lambda function. Configure an AWS Batch job queue for the compute environment. Create a Lambda function to retrieve a list of files and write each item to the job queue.
- Create a Lambda function to retrieve a list of files and write each item to an Amazon SQS queue. Subscribe the metadata extraction Lambda functions to the SQS queue with a large batch size.
- Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create another Step Functions workflow that retrieves a list of files and executes a metadata extraction workflow for each one.
- Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create a Lambda function to retrieve a list of files and write each item to an Amazon SQS queue. Configure the SQS queue as an input to the Step Functions workflow.
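One way to fan work out to the per-category functions is an SQS queue with Lambda event source mappings; a minimal boto3 sketch with placeholder names:

```python
import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# Queue that the "list files" function writes photo keys to (name is a placeholder)
queue_url = sqs.create_queue(QueueName="photo-metadata-jobs")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Subscribe a category-specific extraction function with a large batch size
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="extract-metadata-landscape",   # placeholder function name
    BatchSize=10,
)
```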
A company has deployed two Microsoft Active Directory Domain Controllers into an Amazon VPC with a default configuration. The DHCP options set associated with the VPC has been configured to assign the IP addresses of the Domain Controllers as DNS servers. A VPC interface endpoint has been created but EC2 instances within the VPC are unable to resolve the private endpoint addresses.
Which strategies could a Solutions Architect use to resolve the issue? (Select TWO.)
- Update the DNS service on the Active Directory servers to forward all non-authoritative queries to the VPC Resolver.
- Define an inbound Amazon Route 53 Resolver. Set a conditional forwarding rule for the Active Directory domain to the Active Directory servers. Configure the DNS settings in the VPC DHCP options set to use the AmazonProvidedDNS servers.
- Update the DNS service on the Active Directory servers to forward all queries to the VPC Resolver.
- Define an outbound Amazon Route 53 Resolver. Set a conditional forwarding rule for the Active Directory domain to the Active Directory servers. Configure the DNS settings in the VPC DHCP options set to use the AmazonProvidedDNS servers.
- Configure the DNS service on the EC2 instances in the VPC to use the VPC resolver server as the secondary DNS server.
- Update the DNS service on the Active Directory servers to forward all non-authoritative queries to the VPC Resolver.
- Define an inbound Amazon Route 53 Resolver. Set a conditional forwarding rule for the Active Directory domain to the Active Directory servers. Configure the DNS settings in the VPC DHCP options set to use the AmazonProvidedDNS servers.
- Update the DNS service on the Active Directory servers to forward all queries to the VPC Resolver.
- Define an outbound Amazon Route 53 Resolver. Set a conditional forwarding rule for the Active Directory domain to the Active Directory servers. Configure the DNS settings in the VPC DHCP options set to use the AmazonProvidedDNS servers.
- Configure the DNS service on the EC2 instances in the VPC to use the VPC resolver server as the secondary DNS server.
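A minimal boto3 sketch of an outbound Route 53 Resolver endpoint with a conditional forwarding rule for the Active Directory domain; the subnet, security group, and VPC IDs, the domain name, and the Domain Controller IPs are placeholders:

```python
import boto3

r53r = boto3.client("route53resolver")

# Outbound endpoint in two subnets (IDs are placeholders)
endpoint = r53r.create_resolver_endpoint(
    CreatorRequestId="ad-forwarding-endpoint",
    Name="outbound-to-ad",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[{"SubnetId": "subnet-aaa111"}, {"SubnetId": "subnet-bbb222"}],
)["ResolverEndpoint"]

# Forward queries for the AD domain to the Domain Controllers
rule = r53r.create_resolver_rule(
    CreatorRequestId="ad-forwarding-rule",
    Name="corp-example-com",
    RuleType="FORWARD",
    DomainName="corp.example.com",
    TargetIps=[{"Ip": "10.0.0.10", "Port": 53}, {"Ip": "10.0.1.10", "Port": 53}],
    ResolverEndpointId=endpoint["Id"],
)["ResolverRule"]

r53r.associate_resolver_rule(ResolverRuleId=rule["Id"], VPCId="vpc-0123456789abcdef0")
```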
A company uses Amazon Redshift for analytics. Several teams deploy and manage their own Redshift clusters and management has requested that the costs for these clusters are better managed. The management team has set budgets, and once the budgetary thresholds have been reached a notification should be sent to a distribution list for managers. Teams should be able to view their Redshift cluster’s expenses to date. A Solutions Architect needs to create a solution that ensures the policy is centrally enforced in a multi-account environment.
Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)
- Create an AWS CloudTrail trail that tracks data events. Configure Amazon CloudWatch to monitor the trail and trigger an alarm when billing metrics exceed a certain threshold.
- Create an Amazon CloudWatch metric for billing. Create a custom alert when costs exceed the budgetary threshold.
- Install the unified CloudWatch agent on the Redshift cluster hosts. Track the billing metric data in CloudWatch and trigger an alarm when a threshold is reached.
- Create an AWS Service Catalog portfolio for each team. Add each team’s Amazon Redshift cluster as an AWS CloudFormation template to their Service Catalog portfolio as a Product.
- Update the AWS CloudFormation template to include the AWS::Budgets::Budget resource with the NotificationsWithSubscribers property.
- Create an AWS CloudTrail trail that tracks data events. Configure Amazon CloudWatch to monitor the trail and trigger an alarm when billing metrics exceed a certain threshold.
- Create an Amazon CloudWatch metric for billing. Create a custom alert when costs exceed the budgetary threshold.
- Install the unified CloudWatch agent on the Redshift cluster hosts. Track the billing metric data in CloudWatch and trigger an alarm when a threshold is reached.
- Create an AWS Service Catalog portfolio for each team. Add each team’s Amazon Redshift cluster as an AWS CloudFormation template to their Service Catalog portfolio as a Product.
- Update the AWS CloudFormation template to include the AWS::Budgets::Budget resource with the NotificationsWithSubscribers property.
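The AWS::Budgets::Budget resource maps to the Budgets API; a minimal boto3 sketch of a budget with an email notification at 80% of the limit, using placeholder values:

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",                      # member account ID (placeholder)
    Budget={
        "BudgetName": "redshift-team-a",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "managers@example.com"}],
    }],
)
```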
A company has deployed a new application into an Amazon VPC that does not have Internet access. The company has connected an AWS Direct Connect (DX) private VIF to the VPC and all communications will be over the DX connection. A new requirement states that all data in transit must be encrypted between users and the VPC.
Which strategy should a Solutions Architect use to maintain consistent network performance while meeting this new requirement?
- Create a new private virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX private virtual interface.
- Create a client VPN endpoint and configure the users’ computers to use an AWS client VPN to connect to the VPC over the Internet.
- Create a new Site-to-Site VPN that connects to the VPC over the internet.
- Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public virtual interface.
- Create a new private virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX private virtual interface.
- Create a client VPN endpoint and configure the users’ computers to use an AWS client VPN to connect to the VPC over the Internet.
- Create a new Site-to-Site VPN that connects to the VPC over the internet.
- Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public virtual interface.
An application runs on an Amazon EC2 instance with an attached Amazon EBS Provisioned IOPS (PIOPS) volume. The volume is 200 GB in size and has 3,000 IOPS provisioned. The application requires low latency and random access to the data. A Solutions Architect has been asked to consider options for lowering the cost of the storage without impacting performance and durability.
What should the Solutions Architect recommend?
- Create an Amazon EFS file system with the throughput mode set to Provisioned. Mount the EFS file system to the EC2 operating system.
- Replace the PIOPS volume with a 1-TB EBS General Purpose SSD (gp2) volume.
- Create an Amazon EFS file system with the performance mode set to Max I/O. Mount the EFS file system to the EC2 operating system.
- Replace the PIOPS volume with a 1-TB Throughput Optimized HDD (st1) volume.
- Create an Amazon EFS file system with the throughput mode set to Provisioned. Mount the EFS file system to the EC2 operating system.
- Replace the PIOPS volume with a 1-TB EBS General Purpose SSD (gp2) volume.
- Create an Amazon EFS file system with the performance mode set to Max I/O. Mount the EFS file system to the EC2 operating system.
- Replace the PIOPS volume with a 1-TB Throughput Optimized HDD (st1) volume.
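The sizing logic behind the gp2 option: gp2 baseline performance is 3 IOPS per GiB, so a 1-TiB volume delivers roughly 3,000 IOPS without paying for provisioned IOPS. A small sketch of the calculation and the in-place conversion (volume ID is a placeholder):

```python
import boto3

# gp2 baseline: 3 IOPS per GiB, so ~1 TiB covers the 3,000 IOPS currently provisioned
size_gib = 1024
baseline_iops = size_gib * 3   # 3,072 IOPS

ec2 = boto3.client("ec2")
ec2.modify_volume(VolumeId="vol-0123456789abcdef0", VolumeType="gp2", Size=size_gib)
```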
A company is deploying a web service that will provide read and write access to structured data. The company expects there to be variable usage patterns with some short but significant spikes. The service must dynamically scale and must be fault tolerant across multiple AWS Regions.
Which actions should a Solutions Architect take to meet these requirements?
- Store the data in Amazon DocumentDB in two Regions. Use AWS DMS to synchronize data between databases. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. In Amazon Route 53, configure an alias record and a failover routing policy.
- Store the data in Amazon S3 buckets in two Regions and configure cross Region replication. Create an Amazon CloudFront distribution that points to multiple origins. Use Amazon API Gateway and AWS Lambda for the web frontend and configure Amazon Route 53 with an alias record pointing to the REST API.
- Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. Run the web service in both Regions as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record and a latency-based routing policy with health checks to distribute traffic between the two ALBs.
- Store the data in Amazon Aurora global databases. Add Auto Scaling replicas to both Regions. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. In Amazon Route 53, configure an alias record and a multi-value routing policy.
- Store the data in Amazon DocumentDB in two Regions. Use AWS DMS to synchronize data between databases. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. In Amazon Route 53, configure an alias record and a failover routing policy.
- Store the data in Amazon S3 buckets in two Regions and configure cross Region replication. Create an Amazon CloudFront distribution that points to multiple origins. Use Amazon API Gateway and AWS Lambda for the web frontend and configure Amazon Route 53 with an alias record pointing to the REST API.
- Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. Run the web service in both Regions as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record and a latency-based routing policy with health checks to distribute traffic between the two ALBs.
- Store the data in Amazon Aurora global databases. Add Auto Scaling replicas to both Regions. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. In Amazon Route 53, configure an alias record and a multi-value routing policy.
A company recently noticed an increase in costs associated with Amazon EC2 instances and Amazon RDS databases. The company needs to be able to track the costs. The company uses AWS Organizations for all of their accounts. AWS CloudFormation is used for deploying infrastructure and all resources are tagged. The management team has requested that cost center numbers and project ID numbers are added to all future EC2 instances and RDS databases.
What is the MOST efficient strategy a Solutions Architect should follow to meet these requirements?
- Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate.
- Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict the creation of resources that do not have the cost center and project ID tags specified.
- Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate. Use permissions boundaries to restrict the creation of resources that do not have the cost center and project ID tags specified.
- Use an AWS Config rule to check for untagged resources. Create a centralized AWS Lambda based solution to tag untagged EC2 instances and RDS databases every hour using a cross-account role.
- Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate.
- Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict the creation of resources that do not have the cost center and project ID tags specified.
- Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate. Use permissions boundaries to restrict the creation of resources that do not have the cost center and project ID tags specified.
- Use an AWS Config rule to check for untagged resources. Create a centralized AWS Lambda based solution to tag untagged EC2 instances and RDS databases every hour using a cross-account role.
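A simplified sketch of an SCP that denies launching untagged EC2 or RDS resources; the OU ID and policy content are illustrative only, and a real policy would typically repeat the statement per required tag and scope the resources more tightly:

```python
import json
import boto3

org = boto3.client("organizations")

# Deny resource creation when the CostCenter request tag is missing (repeat for ProjectID)
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ec2:RunInstances", "rds:CreateDBInstance"],
        "Resource": "*",
        "Condition": {"Null": {"aws:RequestTag/CostCenter": "true"}},
    }],
}

policy_id = org.create_policy(
    Content=json.dumps(scp),
    Description="Require cost center tags on new resources",
    Name="require-cost-tags",
    Type="SERVICE_CONTROL_POLICY",
)["Policy"]["PolicySummary"]["Id"]

org.attach_policy(PolicyId=policy_id, TargetId="ou-example-12345678")   # OU ID is a placeholder
```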
A company is planning to build a high-performance computing (HPC) solution in the AWS Cloud. The solution will include a 10-node cluster running Linux. High speed and low latency inter-instance connectivity is required to optimize the performance of the cluster.
Which combination of steps will meet these requirements? (Choose two.)
- Deploy instances across at least three Availability Zones.
- Deploy Amazon EC2 instances in a cluster placement group.
- Use Amazon EC2 instances that support burstable performance.
- Use Amazon EC2 instance types and AMIs that support EFA.
- Deploy Amazon EC2 instances in a partition placement group.
- Deploy instances across at least three Availability Zones.
- Deploy Amazon EC2 instances in a cluster placement group.
- Use Amazon EC2 instances that support burstable performance.
- Use Amazon EC2 instance types and AMIs that support EFA.
- Deploy Amazon EC2 instances in a partition placement group.
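A minimal boto3 sketch combining a cluster placement group with an EFA network interface; the AMI, subnet, and security group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Cluster placement group keeps the nodes physically close for low latency
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch an EFA-capable instance type into the group (IDs are placeholders)
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",            # an EFA-supported type
    MinCount=10,
    MaxCount=10,
    Placement={"GroupName": "hpc-cluster"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
        "InterfaceType": "efa",
    }],
)
```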
A company runs an application that generates user activity reports and stores them in an Amazon S3 bucket. Users are able to download the reports using the application which generates a signed URL. A user recently reported that the reports of other users can be accessed directly from the S3 bucket. A Solutions Architect reviewed the bucket permissions and discovered that public access is currently enabled.
How can the documents be protected from unauthorized access without modifying the application workflow?
- Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket.
- Configure server access logging and monitor the log files to check for unauthorized access.
- Modify the settings on the S3 bucket to enable default encryption for all objects.
- Use the Block Public Access feature in Amazon S3 to set the BlockPublicPolicy option to TRUE on the bucket.
- Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket.
- Configure server access logging and monitor the log files to check for unauthorized access.
- Modify the settings on the S3 bucket to enable default encryption for all objects.
- Use the Block Public Access feature in Amazon S3 to set the BlockPublicPolicy option to TRUE on the bucket.
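For reference, the Block Public Access flags are applied per bucket (or per account) through a single API call; signed URLs keep working because they are authenticated requests. The bucket name below is a placeholder and all four flags are shown for completeness:

```python
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="example-reports",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,      # ignore any public ACLs already on objects
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```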
A company has created a service that they would like a customer to access. The service runs in the company’s AWS account and the customer has a separate AWS account. The company would like to enable the customer to establish least privilege security access using an API or command line tool to the customer account.
What is the MOST secure way to enable the customer to access the service?
- The company should create an IAM role and assign the required permissions to the IAM role. The customer should then use the IAM role’s Amazon Resource Name (ARN) when requesting access to perform the required tasks.
- The company should provide the customer with their AWS account access keys to log in and perform the required tasks.
- The company should create an IAM role and assign the required permissions to the IAM role. The customer should then use the IAM role’s Amazon Resource Name (ARN), including the external ID in the IAM role’s trust policy, when requesting access to perform the required tasks.
- The company should create an IAM user and assign the required permissions to the IAM user. The company should then provide the credentials to the customer to log in and perform the required tasks.
- The company should create an IAM role and assign the required permissions to the IAM role. The customer should then use the IAM role’s Amazon Resource Name (ARN) when requesting access to perform the required tasks.
- The company should provide the customer with their AWS account access keys to log in and perform the required tasks.
- The company should create an IAM role and assign the required permissions to the IAM role. The customer should then use the IAM role’s Amazon Resource Name (ARN), including the external ID in the IAM role’s trust policy, when requesting access to perform the required tasks.
- The company should create an IAM user and assign the required permissions to the IAM user. The company should then provide the credentials to the customer to log in and perform the required tasks.
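For context, cross-account access with an external ID looks roughly like the sketch below; account IDs, role names, and the external ID value are placeholders:

```python
import json
import boto3

# Trust policy on the company's role: only the customer account, and only with the external ID
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::444455556666:root"},   # customer account
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "unique-external-id-123"}},
    }],
}
iam = boto3.client("iam")
iam.create_role(RoleName="customer-access", AssumeRolePolicyDocument=json.dumps(trust_policy))

# The customer assumes the role by ARN, supplying the external ID
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/customer-access",
    RoleSessionName="customer-session",
    ExternalId="unique-external-id-123",
)["Credentials"]
```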
A company currently manages a fleet of Amazon EC2 instances running Windows and Linux in public and private subnets. The operations team currently connects over the Internet to manage the instances as there is no connection to the corporate network.
Security groups have been updated to allow the RDP and SSH protocols from any source IPv4 address. There have been reports of malicious attempts to access the resources, so the company wishes to implement the most secure solution for managing the instances.
Which strategy should a Solutions Architect recommend?
- Deploy the AWS Systems Manager Agent on the EC2 instances. Access the EC2 instances using Session Manager restricting access to users with permission to manage the instances.
- Deploy a Linux bastion host with an Elastic IP address in the public subnet. Allow access to the bastion host from 0.0.0.0/0.
- Deploy a server on the corporate network that can be used for managing EC2 instances. Update the security groups to allow connections over SSH and RDP from the on-premises management server only.
- Configure an IPSec Virtual Private Network (VPN) connecting the corporate network to the Amazon VPC. Update security groups to allow connections over SSH and RDP from the corporate network only.
- Deploy the AWS Systems Manager Agent on the EC2 instances. Access the EC2 instances using Session Manager restricting access to users with permission to manage the instances.
- Deploy a Linux bastion host with an Elastic IP address in the public subnet. Allow access to the bastion host from 0.0.0.0/0.
- Deploy a server on the corporate network that can be used for managing EC2 instances. Update the security groups to allow connections over SSH and RDP from the on-premises management server only.
- Configure an IPSec Virtual Private Network (VPN) connecting the corporate network to the Amazon VPC. Update security groups to allow connections over SSH and RDP from the corporate network only.
A Solutions Architect is migrating an application to AWS Fargate. The task runs in a private subnet and does not have direct connectivity to the internet. When the Fargate task is launched, it fails with the following error:
“CannotPullContainerError: API error (500): Get https://111122223333.dkr.ecr.us-east-1.amazonaws.com/v2/: net/http: request canceled while waiting for connection”
What should the Solutions Architect do to correct the error?
- Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a public subnet to route requests to the internet.
- Enable dual-stack in the Amazon ECS account settings and configure the network for the task to use awsvpc.
- Specify ENABLED for Auto-assign public IP when launching the task.
- Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a private subnet to route requests to the internet.
- Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a public subnet to route requests to the internet.
- Enable dual-stack in the Amazon ECS account settings and configure the network for the task to use awsvpc.
- Specify ENABLED for Auto-assign public IP when launching the task.
- Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a private subnet to route requests to the internet.
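For reference, a Fargate task in a private subnet is launched with public IP assignment disabled, and its image pulls reach Amazon ECR through a NAT gateway in a public subnet (or ECR VPC endpoints); the cluster, task definition, subnet, and security group names below are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="app-cluster",
    taskDefinition="app-task:1",
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-private-1a"],
        "securityGroups": ["sg-0123456789abcdef0"],
        "assignPublicIp": "DISABLED",   # egress goes via the NAT gateway route
    }},
)
```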
A Solutions Architect has deployed an application on Amazon EC2 instances in a private subnet behind a Network Load Balancer (NLB) in a public subnet. Customers have attempted to connect from their office location and are unable to access the application. The targets were registered by instance ID and are all healthy in the associated target group.
What step should the Solutions Architect take to resolve the issue and enable access for the customers?
- Check the security group for the EC2 instances to ensure it allows ingress from the NLB subnets.
- Check the security group for the NLB to ensure it allows egress to the private subnet.
- Check the security group for the EC2 instances to ensure it allows ingress from the customer office.
- Check the security group for the NLB to ensure it allows ingress from the customer office.
- Check the security group for the EC2 instances to ensure it allows ingress from the NLB subnets.
- Check the security group for the NLB to ensure it allows egress to the private subnet.
- Check the security group for the EC2 instances to ensure it allows ingress from the customer office.
- Check the security group for the NLB to ensure it allows ingress from the customer office.
A serverless application is using AWS Lambda and Amazon DynamoDB and developers have finalized an update to the Lambda function code. AWS CodeDeploy will be used to deploy new versions of the function. Updates to the Lambda function should be delivered to a subset of users before deploying the changes to all users. The update process should also be easy to abort and rollback if necessary.
Which CodeDeploy configuration should the solutions architect use?
- A linear deployment
- A canary deployment
- An all-at-once deployment
- A blue/green deployment
- A linear deployment
- A canary deployment
- An all-at-once deployment
- A blue/green deployment
A company is planning to migrate an application from an on-premises data center to the AWS Cloud. The application consists of stateful servers and a separate MySQL database. The application is expected to receive significant traffic and must scale seamlessly. The solution design on AWS includes an Amazon Aurora MySQL database, Amazon EC2 Auto Scaling and Elastic Load Balancing.
A Solutions Architect needs to finalize the design for the solution. Which of the following configurations will ensure a consistent user experience and seamless scalability for both the application and database tiers?
- Add Aurora Replicas and define a scaling policy. Use a Network Load Balancer and set the load balancing algorithm type to round_robin.
- Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to least_outstanding_requests.
- Add Aurora Replicas and define a scaling policy. Use a Network Load Balancer and set the load balancing algorithm type to least_outstanding_requests.
- Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to round_robin.
- Add Aurora Replicas and define a scaling policy. Use a Network Load Balancer and set the load balancing algorithm type to round_robin.
- Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to least_outstanding_requests.
- Add Aurora Replicas and define a scaling policy. Use a Network Load Balancer and set the load balancing algorithm type to least_outstanding_requests.
- Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to round_robin.
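The routing algorithm is a target group attribute on the ALB; a minimal boto3 sketch with a placeholder ARN:

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/0123456789abcdef",
    Attributes=[{"Key": "load_balancing.algorithm.type", "Value": "least_outstanding_requests"}],
)
```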
A company uses multiple AWS accounts. There are separate accounts for development, staging, and production environments. Some new requirements have been issued to control costs and improve the overall governance of the AWS accounts. The company must be able to calculate costs associated with each project and each environment. Commonly deployed IT services must be centrally managed and business units should be restricted to deploying pre-approved IT services only.
Which combination of actions should be taken to meet these requirements? (Select TWO.)
- Use AWS Savings Plans to configure budget thresholds and send alerts to management.
- Apply environment, cost center, and application name tags to all resources that accept tags.
- Use Amazon CloudWatch to create a billing alarm that notifies managers when a billing threshold is reached or exceeded.
- Configure custom budgets and define thresholds using AWS Cost Explorer.
- Create an AWS Service Catalog portfolio for each business unit and add products to the portfolios using AWS CloudFormation templates.
- Use AWS Savings Plans to configure budget thresholds and send alerts to management.
- Apply environment, cost center, and application name tags to all resources that accept tags.
- Use Amazon CloudWatch to create a billing alarm that notifies managers when a billing threshold is reached or exceeded.
- Configure custom budgets and define thresholds using AWS Cost Explorer.
- Create an AWS Service Catalog portfolio for each business unit and add products to the portfolios using AWS CloudFormation templates.
A Solutions Architect has been asked to implement a disaster recovery (DR) site for an eCommerce platform that is growing at an increasing rate. The platform runs on Amazon EC2 web servers behind Elastic Load Balancers, images stored in Amazon S3 and Amazon DynamoDB tables that store product and customer data. The DR site should be located in a separate AWS Region.
Which combinations of actions should the Solutions Architect take to implement the DR site? (Select THREE.)
- Enable versioning on the Amazon S3 buckets and enable cross-Region snapshots.
- Enable DynamoDB global tables to achieve multi-Region table replication.
- Enable Amazon Route 53 health checks to determine if the primary site is down, and route traffic to the disaster recovery site if there is an issue.
- Enable Amazon S3 cross-Region replication on the buckets that contain images.
- Enable multi-Region targets on the Elastic Load Balancer and target Amazon EC2 instances in both Regions.
- Enable DynamoDB Streams and use an event-source mapping to a Lambda function which populates a table in the second Region.
- Enable versioning on the Amazon S3 buckets and enable cross-Region snapshots.
- Enable DynamoDB global tables to achieve multi-Region table replication.
- Enable Amazon Route 53 health checks to determine if the primary site is down, and route traffic to the disaster recovery site if there is an issue.
- Enable Amazon S3 cross-Region replication on the buckets that contain images.
- Enable multi-Region targets on the Elastic Load Balancer and target Amazon EC2 instances in both Regions.
- Enable DynamoDB Streams and use an event-source mapping to a Lambda function which populates a table in the second Region.
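A minimal boto3 sketch of cross-Region replication for the image bucket, assuming versioning is already enabled on both buckets; bucket names and the role ARN are placeholders:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="images-primary",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication",
        "Rules": [{
            "ID": "replicate-images",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {"Prefix": ""},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::images-dr"},
        }],
    },
)
```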
A financial company processes transactions using on-premises application servers which save output to an Amazon DynamoDB table. The company’s data center is connected to AWS using an AWS Direct Connect (DX) connection. Company management has mandated that the solution should be available across multiple Regions. Consistent network performance must be maintained at all times.
What changes should the company make to meet these requirements?
- Create a DX connection to a second AWS Region. Use DynamoDB global tables to replicate data to the second Region. Modify the application to fail over to the second Region.
- Create a DX connection to a second AWS Region. Create an identical DynamoDB table in the second Region. Enable DynamoDB auto scaling to manage throughput capacity. Modify the application to write to the second Region.
- Use an AWS managed VPN to connect to a second AWS Region. Create a copy of the DynamoDB table in the second Region. Enable DynamoDB streams in the primary Region and use AWS DMS to synchronize data to the copied table.
- Use an AWS managed VPN to connect to a second AWS Region. Create a copy of the DynamoDB table in the second Region. Enable DynamoDB streams in the primary Region and use AWS Lambda to synchronize data to the copied table.
- Create a DX connection to a second AWS Region. Use DynamoDB global tables to replicate data to the second Region. Modify the application to fail over to the second Region.
- Create a DX connection to a second AWS Region. Create an identical DynamoDB table in the second Region. Enable DynamoDB auto scaling to manage throughput capacity. Modify the application to write to the second Region.
- Use an AWS managed VPN to connect to a second AWS Region. Create a copy of the DynamoDB table in the second Region. Enable DynamoDB streams in the primary Region and use AWS DMS to synchronize data to the copied table.
- Use an AWS managed VPN to connect to a second AWS Region. Create a copy of the DynamoDB table in the second Region. Enable DynamoDB streams in the primary Region and use AWS Lambda to synchronize data to the copied table.
A Solutions Architect is helping to standardize a company’s method of deploying applications to AWS using AWS CodePipeline and AWS CloudFormation. A group of developers create applications using JavaScript and TypeScript and they are concerned about needing to learn new domain-specific languages. They are also reluctant to lose access to features of the existing languages such as looping.
How can the Solutions Architect address the developers’ concerns and quickly bring the applications up to deployment standards?
- Define the AWS resources using JavaScript or TypeScript. Use the AWS Cloud Development Kit (AWS CDK) to create CloudFormation templates from the developers’ code and use the AWS CDK to create CloudFormation stacks. Incorporate the AWS CDK as a CodeBuild job in CodePipeline.
- Use a third-party resource provisioning engine inside AWS CodeBuild to standardize the deployment processes. Orchestrate the CodeBuild job using CodePipeline and use CloudFormation for deployment.
- Use AWS SAM and specify a serverless transform. Add the JavaScript and TypeScript code as metadata to the template file. Use AWS CodeBuild to build the code and output a CloudFormation template.
- Create CloudFormation templates and re-use parts of the JavaScript and TypeScript code as instance user data. Use the AWS Cloud Development Kit (AWS CDK) to deploy the application using these templates. Incorporate the AWS CDK into CodePipeline and deploy the application to AWS using these templates.
- Define the AWS resources using JavaScript or TypeScript. Use the AWS Cloud Development Kit (AWS CDK) to create CloudFormation templates from the developers’ code and use the AWS CDK to create CloudFormation stacks. Incorporate the AWS CDK as a CodeBuild job in CodePipeline.
- Use a third-party resource provisioning engine inside AWS CodeBuild to standardize the deployment processes. Orchestrate the CodeBuild job using CodePipeline and use CloudFormation for deployment.
- Use AWS SAM and specify a serverless transform. Add the JavaScript and TypeScript code as metadata to the template file. Use AWS CodeBuild to build the code and output a CloudFormation template.
- Create CloudFormation templates and re-use parts of the JavaScript and TypeScript code as instance user data. Use the AWS Cloud Development Kit (AWS CDK) to deploy the application using these templates. Incorporate the AWS CDK into CodePipeline and deploy the application to AWS using these templates.
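For context, the CDK lets developers keep using a general-purpose language (TypeScript, JavaScript, Python, and others) and synthesizes CloudFormation templates from it; a minimal CDK v2 sketch in Python with placeholder names (the same pattern applies in TypeScript):

```python
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class AppStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Ordinary language features such as loops drive resource creation
        for env_name in ["dev", "staging", "prod"]:
            s3.Bucket(self, f"ArtifactBucket-{env_name}")

app = App()
AppStack(app, "AppStack")
app.synth()   # emits a CloudFormation template that the pipeline can deploy
```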
A company runs its IT services from an on-premises data center and is moving to AWS. The company wants to move their development and deployment processes to use managed services where possible. They would like to leverage their existing Chef tools and experience. The application must be deployed to a staging environment and then to production. The ability to roll back quickly must be available in case issues occur following a production deployment.
Which AWS service and deployment strategy should a Solutions Architect use to meet the company’s requirements?
- Use AWS OpsWorks and deploy the application using a canary deployment strategy.
- Use AWS CodeDeploy and deploy the application using an in-place update deployment strategy.
- Use AWS OpsWorks and deploy the application using a blue/green deployment strategy.
- Use AWS Elastic Beanstalk and deploy the application using a rolling update deployment strategy.
- Use AWS OpsWorks and deploy the application using a canary deployment strategy.
- Use AWS CodeDeploy and deploy the application using an in-place update deployment strategy.
- Use AWS OpsWorks and deploy the application using a blue/green deployment strategy.
- Use AWS Elastic Beanstalk and deploy the application using a rolling update deployment strategy.
A company has experienced issues updating an AWS Lambda function that is deployed using an AWS CloudFormation stack. The issues have resulted in outages that affected large numbers of customers. A Solutions Architect must adjust the deployment process to support a canary release strategy. Invocation traffic should be routed based on specified weights.
Which solution will meet these requirements?
- Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load.
- Use AWS CodeDeploy to deploy using the CodeDeployDefault.HalfAtATime deployment configuration to distribute the load.
- Create an alias for new versions of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load.
- Create a version for every new update to the Lambda function code. Use the AWS CLI update-function-configuration command with the routing-config parameter to distribute the load.
- Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load.
- Use AWS CodeDeploy to deploy using the CodeDeployDefault.HalfAtATime deployment configuration to distribute the load.
- Create an alias for new versions of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load.
- Create a version for every new update to the Lambda function code. Use the AWS CLI update-function-configuration command with the routing-config parameter to distribute the load.
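Weighted alias routing is configured on the Lambda alias; a minimal boto3 equivalent of the update-alias call with routing-config, using placeholder function and alias names:

```python
import boto3

lam = boto3.client("lambda")

# Publish the updated code as a new version
new_version = lam.publish_version(FunctionName="report-fn")["Version"]

# Keep version 1 as primary and shift 10% of the alias traffic to the new version
lam.update_alias(
    FunctionName="report-fn",
    Name="live",
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {new_version: 0.10}},
)
```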
A fintech company runs an on-premises environment that ingests data feeds from financial services companies, transforms the data, and then sends it to an on-premises Apache Kafka cluster. The company plans to use AWS services to build a scalable, near real-time solution that offers consistent network performance to provide the data feeds to a web application. Which steps should a Solutions Architect take to build the solution? (Select THREE.)
- Establish a Site-to-Site VPN from the on-premises data center to AWS.
- Create a GraphQL API in AWS AppSync, create an AWS Lambda function to process the Amazon Kinesis data stream, and use the @connections command to send callback messages to connected clients.
- Establish an AWS Direct Connect connection from the on premises data center to AWS.
- Create a WebSocket API in Amazon API Gateway, create an AWS Lambda function to process an Amazon Kinesis data stream, and use the @connections command to send callback messages to connected clients.
- Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Consumer Library to put the data into an Amazon Kinesis data stream.
- Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Kinesis Producer Library to put the data into a Kinesis data stream.
- Establish a Site-to-Site VPN from the on-premises data center to AWS.
- Create a GraphQL API in AWS AppSync, create an AWS Lambda function to process the Amazon Kinesis data stream, and use the @connections command to send callback messages to connected clients.
- Establish an AWS Direct Connect connection from the on premises data center to AWS.
- Create a WebSocket API in Amazon API Gateway, create an AWS Lambda function to process an Amazon Kinesis data stream, and use the @connections command to send callback messages to connected clients.
- Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Consumer Library to put the data into an Amazon Kinesis data stream.
- Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Kinesis Producer Library to put the data into a Kinesis data stream.
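The @connections callback is issued against the WebSocket API's management endpoint; a rough Lambda consumer sketch in which the API endpoint and connection ID are placeholders (real code would track connection IDs, for example in DynamoDB):

```python
import boto3

# Management endpoint of the WebSocket API (API ID, Region, and stage are placeholders)
apigw = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://abc123.execute-api.us-east-1.amazonaws.com/prod",
)

def handler(event, context):
    """Kinesis-triggered Lambda: push each record to a connected WebSocket client."""
    for record in event["Records"]:
        apigw.post_to_connection(
            ConnectionId="example-connection-id",    # normally looked up per client
            Data=record["kinesis"]["data"],          # base64-encoded payload
        )
```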
A new application will ingest millions of records per minute from user devices all over the world. Each record is less than 4 KB in size and must be stored durably and accessed with low latency. The data must be stored for 90 days after which it can be deleted. It has been estimated that storage requirements for a year will be 15-20TB.
Which storage strategy is the MOST cost-effective and meets the design requirements?
- Store each incoming record as a single .csv file in an Amazon S3 bucket. Configure a lifecycle policy to delete data older than 90 days.
- Store each incoming record in a single table in an Amazon RDS MySQL database. Run a nightly cron job that executes a query to delete any records older than 90 days.
- Store each incoming record in an Amazon DynamoDB table. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 90 days.
- Store the records in an Amazon Kinesis Data Stream. Configure the Time to Live (TTL) feature to delete records older than 90 days.
- Store each incoming record as a single .csv file in an Amazon S3 bucket. Configure a lifecycle policy to delete data older than 90 days.
- Store each incoming record in a single table in an Amazon RDS MySQL database. Run a nightly cron job that executes a query to delete any records older than 90 days.
- Store each incoming record in an Amazon DynamoDB table. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 90 days.
- Store the records in an Amazon Kinesis Data Stream. Configure the Time to Live (TTL) feature to delete records older than 90 days.
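DynamoDB TTL deletes items once an epoch-seconds attribute passes; a minimal boto3 sketch with placeholder table and attribute names:

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on an epoch-seconds attribute
dynamodb.update_time_to_live(
    TableName="device-records",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Each record carries its own expiry 90 days in the future
dynamodb.put_item(
    TableName="device-records",
    Item={
        "device_id": {"S": "device-123"},
        "payload": {"S": "example payload"},
        "expires_at": {"N": str(int(time.time()) + 90 * 24 * 3600)},
    },
)
```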
A company has deployed a high performance computing (HPC) cluster in an Amazon VPC. The cluster runs a tightly coupled workload that generates a large number of shared files that are stored in an Amazon EFS file system. The cluster has grown to over 800 instances and the performance has degraded to a problematic level.
A Solutions Architect needs to make some changes to the design to improve the overall performance. Which of the following changes should the Solutions Architect make? (Select THREE.)
- Enable an Elastic Fabric Adapter (EFA) on a supported EC2 instance type.
- Attach multiple elastic network interfaces (ENI) to reduce latency.
- Ensure the cluster is launched across multiple Availability Zones.
- Replace Amazon EFS with Amazon FSx for Lustre.
- Ensure the HPC cluster is launched within a single Availability Zone.
- Replace Amazon EFS with multiple FSx for Windows File Server file systems.
- Enable an Elastic Fabric Adapter (EFA) on a supported EC2 instance type.
- Attach multiple elastic network interfaces (ENI) to reduce latency.
- Ensure the cluster is launched across multiple Availability Zones.
- Replace Amazon EFS with Amazon FSx for Lustre.
- Ensure the HPC cluster is launched within a single Availability Zone.
- Replace Amazon EFS with multiple FSx for Windows File Server file systems.
A company offers a photo sharing application to its users through a social networking app. To ensure images can be displayed with consistency, a single Amazon EC2 instance running JavaScript code processes the photos and stores the processed images in an Amazon S3 bucket. A front-end application runs from a static website in another S3 bucket and loads the processed images for display in the app.
The company has asked a Solutions Architect to make some recommendations for a cost-effective solution that offers massive scalability for a global user base.
Which combination of changes should the Solutions Architect recommend? (Select TWO.)
- Deploy the applications in an Amazon ECS cluster and apply Service Auto Scaling.
- Place the image processing EC2 instance into an Auto Scaling group.
- Create an Amazon CloudFront distribution in front of the processed images bucket.
- Replace the EC2 instance with AWS Lambda to run the image processing tasks.
- Replace the EC2 instance with Amazon Rekognition for image processing.
- Deploy the applications in an Amazon ECS cluster and apply Service Auto Scaling.
- Place the image processing EC2 instance into an Auto Scaling group.
- Create an Amazon CloudFront distribution in front of the processed images bucket.
- Replace the EC2 instance with AWS Lambda to run the image processing tasks.
- Replace the EC2 instance with Amazon Rekognition for image processing.
A company requires federated access to AWS for users of a mobile application. The security team has mandated that the application must use a custom-built solution for authenticating users and use IAM roles for authorization.
Which of the following actions would enable authentication and authorization and satisfy the requirements? (Select TWO.)
- Use a custom-built SAML-compatible solution that uses LDAP for authentication and uses a SAML assertion to perform authorization to the IAM identity provider.
- Use a custom-built SAML-compatible solution for authentication and use AWS SSO for authorization.
- Use a custom-built OpenID Connect-compatible solution for authentication and use Amazon Cognito for authorization.
- Create a custom-built LDAP connector using Amazon API Gateway and AWS Lambda for authentication. Use a token-based Lambda authorizer that uses JWT.
- Use a custom-built OpenID Connect-compatible solution with AWS SSO for authentication and authorization.
- Use a custom-built SAML-compatible solution that uses LDAP for authentication and uses a SAML assertion to perform authorization to the IAM identity provider.
- Use a custom-built SAML-compatible solution for authentication and use AWS SSO for authorization.
- Use a custom-built OpenID Connect-compatible solution for authentication and use Amazon Cognito for authorization.
- Create a custom-built LDAP connector using Amazon API Gateway and AWS Lambda for authentication. Use a token-based Lambda authorizer that uses JWT.
- Use a custom-built OpenID Connect-compatible solution with AWS SSO for authentication and authorization.