Timed Mode Set 1 Flashcards
An accounting firm hosts a mix of Windows and Linux Amazon EC2 instances in its AWS account. The solutions architect has been tasked with conducting a monthly performance check on all production instances. There are more than 200 On-Demand EC2 instances running in the production environment, and each instance must have a logging mechanism that collects system details such as memory usage, disk space, and other metrics. The system logs will be analyzed using AWS analytics tools and the results will be stored in an S3 bucket.
Which of the following is the most efficient way to collect and analyze logs from the instances with minimal effort?
Set up and configure the unified CloudWatch agent on each On-Demand EC2 instance to automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.
Enable the Traffic Mirroring feature and install AWS CDK on each On-Demand EC2 instance. Create a custom daemon script that would collect and push data to CloudWatch Logs periodically. Set up CloudWatch detailed monitoring and use CloudWatch Logs Insights to analyze the log data of all instances.
Set up and install the AWS Systems Manager Agent (SSM Agent) on each On-Demand EC2 instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.
Set up and install the Amazon Inspector agent on each On-Demand EC2 instance, which will collect and push data to CloudWatch Logs periodically. Set up a CloudWatch dashboard to properly analyze the log data of all instances.
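For reference, a minimal boto3 sketch of how the unified CloudWatch agent option could be rolled out fleet-wide through Systems Manager and queried afterwards; the tag, log group name, and query below are illustrative assumptions, not part of the scenario.

import time
import boto3

ssm = boto3.client("ssm")
logs = boto3.client("logs")

# Install the unified CloudWatch agent on every tagged instance via Systems Manager.
ssm.send_command(
    Targets=[{"Key": "tag:Environment", "Values": ["production"]}],  # assumed tag
    DocumentName="AWS-ConfigureAWSPackage",
    Parameters={"action": ["Install"], "name": ["AmazonCloudWatchAgent"]},
)

# Query the collected logs with CloudWatch Logs Insights.
query = logs.start_query(
    logGroupName="/ec2/production/system",  # assumed log group written by the agent
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString="fields @timestamp, @message | sort @timestamp desc | limit 20",
)
time.sleep(5)
print(logs.get_query_results(queryId=query["queryId"]))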
A private bank is hosting a secure web application that allows its agents to view highly sensitive information about its clients. The amount of traffic that the web app will receive is known and not expected to fluctuate. An SSL certificate will be used as part of the application’s data security. The chief information security officer (CISO) is concerned about the security of the SSL private key and wants to ensure that the key cannot be accidentally or intentionally moved outside the corporate environment. The solutions architect is also concerned that the application logs might contain some sensitive information. The EBS volumes used to store the data are already encrypted. In this scenario, the application logs must be stored securely and durably so that they can only be decrypted by authorized employees.
Which of the following is the most suitable and highly available architecture that can meet all of the requirements?
Distribute traffic to a set of web servers using an Elastic Load Balancer. Use TCP load balancing for the load balancer and configure your web servers to retrieve the SSL private key from a private Amazon S3 bucket on boot. Use another private Amazon S3 bucket to store your web server logs using Amazon S3 server-side encryption.
Distribute traffic to a set of web servers using an Elastic Load Balancer. To secure the SSL private key, upload the key to the load balancer and configure the load balancer to offload the SSL traffic. Lastly, write your application logs to an instance store volume that has been encrypted using a randomly generated AES key.
Distribute traffic to a set of web servers using an Elastic Load Balancer that performs TCP load balancing. Use an AWS CloudHSM to perform the SSL transactions and deliver your application logs to a private Amazon S3 bucket using server-side encryption.
Distribute traffic to a set of web servers using an Elastic Load Balancer that performs TCP load balancing. Use CloudHSM deployed to two Availability Zones to perform the SSL transactions and deliver your application logs to a private Amazon S3 bucket using server-side encryption.
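The CloudHSM half of these options is provisioned at the cluster/appliance level, but the encrypted log-delivery half can be sketched with boto3; the bucket name, object key, and KMS key alias below are assumptions.

import boto3

s3 = boto3.client("s3")
bucket = "example-app-log-archive"  # assumed bucket name

# Enforce default server-side encryption on the private log bucket.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": "alias/app-log-key",  # assumed CMK alias
        }}]
    },
)

# Deliver a web server log with server-side encryption applied on upload.
s3.put_object(
    Bucket=bucket,
    Key="web/2024/10/01/access.log",
    Body=open("/var/log/httpd/access_log", "rb"),
    ServerSideEncryption="aws:kms",
)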
A company is hosting a multi-tier web application in AWS. It is composed of an Application Load Balancer and EC2 instances across three Availability Zones. During peak load, its stateless web servers operate at 95% utilization. The system is set up to use Reserved Instances to handle the steady-state load and On-Demand Instances to handle the peak load. Your manager has instructed you to review the current architecture and make the necessary changes to improve the system.
Which of the following provides the most cost-effective architecture to allow the application to recover quickly in the event that an Availability Zone is unavailable during peak load?
Launch a Spot Fleet using a diversified allocation strategy, with Auto Scaling enabled on each AZ to handle the peak load instead of On-Demand instances. Retain the current setup for handling the steady state load.
Use a combination of Reserved and On-Demand instances on each AZ to handle both the steady state and peak load.
Launch an Auto Scaling group of Reserved instances on each AZ to handle the peak load. Retain the current setup for handling the steady state load.
Use a combination of Spot and On-Demand instances on each AZ to handle both the steady state and peak load.
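For illustration, a minimal boto3 sketch of the Spot Fleet option with a diversified allocation strategy; the fleet role, launch template, instance types, and subnets are placeholder assumptions.

import boto3

ec2 = boto3.client("ec2")

# Request a Spot Fleet that diversifies capacity across instance types and AZs
# to absorb peak load; values below are illustrative placeholders.
ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
        "AllocationStrategy": "diversified",
        "TargetCapacity": 12,
        "LaunchTemplateConfigs": [{
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": "lt-0abc1234567890def",  # assumed launch template
                "Version": "$Latest",
            },
            "Overrides": [
                {"InstanceType": "m5.large", "SubnetId": "subnet-0aaa111"},
                {"InstanceType": "m5.large", "SubnetId": "subnet-0bbb222"},
                {"InstanceType": "c5.large", "SubnetId": "subnet-0ccc333"},
            ],
        }],
    }
)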
A company has created multiple accounts in AWS to support the rapid growth of its cloud services. The multiple accounts are used to separate their various departments such as finance, human resources, engineering, and many others. Each account is managed by a Systems Administrator who has root access to that specific account only. There is a requirement to centrally manage policies across multiple AWS accounts by allowing or denying particular AWS services for individual accounts, or for groups of accounts.
Which is the most suitable solution that you should implement with the LEAST amount of complexity?
Use AWS Organizations and Service Control Policies to control the list of AWS services that can be used by each member account.
Provide access to externally authenticated users via Identity Federation. Set up an IAM role to specify permissions for users from each department whose identity is federated from your organization or a third-party identity provider.
Connect all departments by setting up cross-account access to each of the AWS accounts of the company. Create and attach IAM policies to your resources based on their respective departments to control access.
Set up AWS Organizations and Organizational Units (OU) to connect all AWS accounts of each department. Create a custom IAM Policy to allow or deny the use of certain AWS services for each account.
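To make the Service Control Policy option concrete, here is a minimal boto3 sketch; the policy name, denied service, and OU ID are assumptions.

import json
import boto3

org = boto3.client("organizations")

# A Service Control Policy that denies a particular service for a group of accounts.
scp = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "redshift:*", "Resource": "*"}],
}

policy = org.create_policy(
    Name="DenyRedshiftForFinance",            # assumed policy name
    Description="Block Amazon Redshift usage",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to an Organizational Unit (or an individual account ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-12345678",              # assumed OU ID
)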
A company has several IoT-enabled devices and sells them to customers around the globe. Every 5 minutes, each IoT device sends a data file that includes the device status and other information to an Amazon S3 bucket. Every midnight, a Python cron job runs from an Amazon EC2 instance to read and process each data file in the S3 bucket and load the values into a designated Amazon RDS database. The cron job takes about 10 minutes to process a day’s worth of data. After each data file is processed, it is eventually deleted from the S3 bucket. The company wants to expedite the process and access the processed data in Amazon RDS as soon as possible.
Which of the following actions would you implement to achieve this requirement with the LEAST amount of effort?
Increase the Amazon EC2 instance size and spawn more instances to speed up the processing of the data files. Set the Python script cron job schedule to a 1-minute interval to further improve the access time.
Convert the Python script cron job to an AWS Lambda function. Configure AWS CloudTrail to log data events of the Amazon S3 bucket. Set up an Amazon EventBridge rule to trigger the Lambda function whenever an upload event on the S3 bucket occurs.
Convert the Python script cron job to an AWS Lambda function. Configure the Amazon S3 bucket event notifications to trigger the Lambda function whenever an object is uploaded to the bucket.
Convert the Python script cron job to an AWS Lambda function. Create an Amazon EventBridge rule scheduled at 1-minute intervals and trigger the Lambda function. Create parallel CloudWatch rules that trigger the same Lambda function to further reduce the processing time.
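A minimal sketch of the S3-event-to-Lambda option, assuming the cron job logic has already been ported into a Lambda function; the bucket name and function ARN are placeholders, and the Lambda resource policy must also allow s3.amazonaws.com to invoke the function.

import boto3

s3 = boto3.client("s3")

# Invoke the processing Lambda function whenever a device file lands in the bucket.
s3.put_bucket_notification_configuration(
    Bucket="iot-device-status-files",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-device-file",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)

# Inside the Lambda function, each invocation receives the uploaded object's key.
def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # ...read the file and load its values into the RDS database...
        print(f"processing s3://{bucket}/{key}")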
An enterprise runs its CMS application on an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer. The instances are placed on private subnets while the ALB is placed on public subnets. As part of best practices, the AWS Systems Manager Agent is installed on the instances and AWS Systems Manager Session Manager is used to log in to the instances. The EC2 instances send application logs to Amazon CloudWatch Logs.
Upon the deployment of the new application version, the new instances are being marked as unhealthy by the ALB and are being replaced by the Auto Scaling group. For troubleshooting, the solutions architect tries to log in to the unhealthy instances, but the instances are terminated before they can be accessed. The collected logs in CloudWatch Logs do not show definitive errors in the application.
Which of the following options is the quickest way for the solutions architect to troubleshoot the problem?
Go to the Auto Scaling Groups section in the AWS console and suspend the “Terminate” process for the ASG. Log in to one of the unhealthy instances using AWS Systems Manager Session Manager.
Create a temporary Amazon EC2 instance and deploy the new application version. Log in to the EC2 instance using the SSH key. Add the instance to the Application Load Balancer to inspect application logs in real-time.
Select one of the new Amazon EC2 instances and enable EC2 instance termination protection. Gain access to the unhealthy instance using the AWS Systems Manager Session Manager.
Update the application log setting to have more verbose logging to capture more application logs. Ensure that the Amazon CloudWatch agent is installed and running on the instances.
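A minimal boto3 sketch of suspending the Terminate process on the Auto Scaling group so an unhealthy instance stays around long enough to inspect; the ASG name is an assumption.

import boto3

autoscaling = boto3.client("autoscaling")

# Temporarily stop the Auto Scaling group from terminating unhealthy instances
# so an instance can be inspected with Session Manager. Resume when finished.
autoscaling.suspend_processes(
    AutoScalingGroupName="cms-web-asg",      # assumed ASG name
    ScalingProcesses=["Terminate"],
)

# ... troubleshoot via AWS Systems Manager Session Manager ...

autoscaling.resume_processes(
    AutoScalingGroupName="cms-web-asg",
    ScalingProcesses=["Terminate"],
)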
A fintech startup has developed a cloud-based payment processing system that accepts credit card payments as well as cryptocurrencies such as Bitcoin, Ripple, and the like. The system is deployed in AWS and uses EC2, DynamoDB, S3, and CloudFront to process the payments. Since they are accepting credit card information from users, they are required to be compliant with the Payment Card Industry Data Security Standard (PCI DSS). During a recent third-party audit, it was found that the credit card numbers are not properly encrypted; hence, the system failed the PCI DSS compliance test. You were hired by the fintech startup to solve this issue so they can release the product to the market as soon as possible. In addition, you also have to improve performance by increasing the proportion of viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content.
In this scenario, what is the best option to protect and encrypt the sensitive credit card information of the users and to improve the cache hit ratio of your CloudFront distribution?
Configure the CloudFront distribution to use Signed URLs. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio.
Add a custom SSL certificate to the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio.
Configure the CloudFront distribution to enforce secure end-to-end connections to origin servers by using HTTPS and field-level encryption. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio.
Create an origin access control (OAC) and add it to the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio.
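Field-level encryption itself is configured on the CloudFront distribution (a public key plus a field-level encryption profile), but the cache-hit-ratio half of these options can be sketched with boto3; the bucket, key, and max-age value are assumptions.

import boto3

s3 = boto3.client("s3")

# Serve origin objects with a long Cache-Control max-age so CloudFront keeps them
# in edge caches longer and more viewer requests result in cache hits.
s3.put_object(
    Bucket="payment-portal-static-assets",   # assumed origin bucket
    Key="js/checkout.min.js",
    Body=open("build/js/checkout.min.js", "rb"),
    ContentType="application/javascript",
    CacheControl="max-age=31536000",          # one year; longest practical value
)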
An innovative Business Process Outsourcing (BPO) startup is planning to launch a scalable and cost-effective call center system using AWS. The system should be able to receive inbound calls from thousands of customers and generate user contact flows. Callers must have the capability to perform basic tasks such as changing their password or checking their balance without having to speak to a call center agent. It should also have advanced deep learning functionalities such as automatic speech recognition (ASR) to achieve highly engaging user experiences and lifelike conversational interactions. A feature that allows the solution to query other business applications and send relevant data back to callers must also be implemented.
Which of the following is the MOST suitable solution that the Solutions Architect should implement?
Set up a cloud-based contact center using the Amazon Connect service. Create a conversational chatbot using Amazon Lex with automatic speech recognition and natural language understanding to recognize the intent of the caller then integrate it with Amazon Connect. Connect the solution to various business applications and other internal systems using AWS Lambda functions.
Set up a cloud-based contact center using the AWS Ground Station service. Create a conversational chatbot using Amazon Alexa for Business with automatic speech recognition and natural language understanding to recognize the intent of the caller then integrate it with AWS Ground Station. Connect the solution to various business applications and other internal systems using AWS Lambda functions.
Set up a cloud-based contact center using the AWS Elemental MediaConnect service. Create a conversational chatbot using Amazon Polly with automatic speech recognition and natural language understanding to recognize the intent of the caller then integrate it with AWS Elemental MediaConnect. Connect the solution to various business applications and other internal systems using AWS Lambda functions.
Set up a cloud-based contact center using the Amazon Connect service. Create a conversational chatbot using Amazon Comprehend with automatic speech recognition and natural language understanding to recognize the intent of the caller then integrate it with Amazon Connect. Connect the solution to various business applications and other internal systems using AWS Lambda functions.
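For illustration, a minimal Lambda fulfillment handler that an Amazon Lex bot integrated with Amazon Connect could invoke to query a business application; the intent name and balance lookup are placeholder assumptions, and the response shape follows the Lex (V1) fulfillment format.

def handler(event, context):
    intent = event["currentIntent"]["name"]

    if intent == "CheckBalance":
        # ...query the core banking application here (placeholder)...
        message = "Your current balance is $1,250.00."
    else:
        message = "Sorry, I can't help with that yet."

    # Close the dialog and return the spoken/displayed response to the caller.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }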
A company hosts its multi-tiered web application on a fleet of Auto Scaling EC2 instances spread across two Availability Zones. The Application Load Balancer is in the public subnets and the Amazon EC2 instances are in the private subnets. After a few weeks of operations, the users are reporting that the web application is not working properly. Upon testing, the Solutions Architect found that the website is accessible and the login is successful. However, when the “find a nearby store” function is clicked on the website, the map loads only about 50% of the time when the page is refreshed. This function involves a third-party RESTful API call to a maps provider. Amazon EC2 NAT instances are used for these outbound API calls.
Which of the following options are the MOST likely reason for this failure and the recommended solution?
This error is caused by an overloaded NAT instance in one of the subnets. Scale the EC2 NAT instances to larger-sized instances to ensure that they can handle the growing traffic.
This error is caused by a failed NAT instance in one of the public subnets. Use NAT Gateways instead of EC2 NAT instances to ensure availability and scalability.
The error is caused by a failure in one of the Availability Zones in the VPC of the third-party provider. Contact the third-party provider’s support hotline and request that they fix it.
One of the subnets in the VPC has a misconfigured Network ACL that blocks outbound traffic to the third-party provider. Update the network ACL to allow this connection and configure IAM permissions to restrict these changes in the future.
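A minimal boto3 sketch of swapping an EC2 NAT instance for a managed NAT gateway; the subnet, route table, and other IDs are placeholder assumptions.

import boto3

ec2 = boto3.client("ec2")

# Replace the EC2 NAT instance with a managed NAT gateway in the public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0a1b2c3d",               # assumed public subnet
    AllocationId=eip["AllocationId"],
)

ec2.get_waiter("nat_gateway_available").wait(
    NatGatewayIds=[nat["NatGateway"]["NatGatewayId"]]
)

# Point the private subnet's default route at the NAT gateway.
ec2.replace_route(
    RouteTableId="rtb-0aa11bb22",             # assumed private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)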
A company runs hundreds of Windows-based Amazon EC2 instances on AWS. The Solutions Architect has been assigned to develop a workflow to ensure that the required patches for all Windows EC2 instances are properly identified and applied automatically. To meet their system uptime requirements, it is of utmost importance to ensure that EC2 instance reboots do not occur at the same time on all of their Windows instances. This is to avoid any loss of revenue caused by system unavailability.
Which of the following will meet the above requirements?
Create two Patch Groups with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Create two CloudWatch Events rules which are configured to use a cron expression to automate the execution of patching for the two Patch Groups using the AWS Systems Manager Run command. Set up an AWS Systems Manager State Manager document to define custom commands which will be executed during patch execution.
Create a Patch Group with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on your patch group. Set up a maintenance window and associate it with your patch group. Assign the AWS-RunPatchBaseline document as a task within your maintenance window.
Create two Patch Groups with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Set up two non-overlapping maintenance windows and associate each with a different patch group. Using Patch Group tags, register targets with specific maintenance windows and lastly, assign the AWS-RunPatchBaseline document as a task within each maintenance window which has a different processing start time.
Create a Patch Group with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on your patch group. Create a CloudWatch Events rule configured to use a cron expression to automate the execution of patching on a given schedule using the AWS Systems Manager Run Command. Set up an AWS Systems Manager State Manager document to define custom commands which will be executed during patch execution.
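A minimal boto3 sketch of one patch group with its own maintenance window; a second group would repeat the same calls with a non-overlapping schedule so reboots are staggered. The baseline ID, tag value, and cron expression are assumptions.

import boto3

ssm = boto3.client("ssm")

# Tie the default patch baseline to a patch group.
ssm.register_patch_baseline_for_patch_group(
    BaselineId="pb-0123456789abcdef0",        # assumed AWS-DefaultPatchBaseline ID
    PatchGroup="windows-group-a",
)

# Create a maintenance window for this group only.
window = ssm.create_maintenance_window(
    Name="patch-window-group-a",
    Schedule="cron(0 3 ? * SUN *)",           # group B would use a later, non-overlapping slot
    Duration=3,
    Cutoff=1,
    AllowUnassociatedTargets=False,
)

# Register the group's instances as targets via the Patch Group tag.
target = ssm.register_target_with_maintenance_window(
    WindowId=window["WindowId"],
    ResourceType="INSTANCE",
    Targets=[{"Key": "tag:Patch Group", "Values": ["windows-group-a"]}],
)

# Run AWS-RunPatchBaseline against those targets inside the window.
ssm.register_task_with_maintenance_window(
    WindowId=window["WindowId"],
    Targets=[{"Key": "WindowTargetIds", "Values": [target["WindowTargetId"]]}],
    TaskArn="AWS-RunPatchBaseline",
    TaskType="RUN_COMMAND",
    TaskInvocationParameters={"RunCommand": {"Parameters": {"Operation": ["Install"]}}},
    MaxConcurrency="20%",
    MaxErrors="10%",
)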
A company plans to decommission its legacy web application that is hosted in AWS. It is composed of an Auto Scaling group of EC2 instances and an Application Load Balancer (ALB). The new application is built on a new framework. The solutions architect has been tasked with setting up a new serverless architecture that comprises AWS Lambda, API Gateway, and DynamoDB. In addition, a CI/CD pipeline is required to automate the build process and to support gradual deployments.
Which is the most suitable way to build, test, and deploy the new architecture in AWS?
Use the AWS Serverless Application Repository to organize related components, share configuration such as memory and timeouts between resources, and deploy all related resources together as a single, versioned entity.
Use CodeCommit, CodeBuild, CodeDeploy, and CodePipeline to build the CI/CD pipeline, then use AWS Systems Manager Automation to automate the build process and support gradual deployments.
Use AWS Elastic Beanstalk to deploy the application.
Use AWS Serverless Application Model (AWS SAM) and set up AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline to build a CI/CD pipeline.
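With AWS SAM, gradual deployments are normally declared on the function itself (a DeploymentPreference) and SAM wires up CodeDeploy for you; the boto3 sketch below only illustrates the underlying CodeDeploy resources that such a canary rollout relies on, with all names as assumptions.

import boto3

codedeploy = boto3.client("codedeploy")

# A CodeDeploy application on the Lambda compute platform.
codedeploy.create_application(
    applicationName="orders-api",             # assumed names throughout
    computePlatform="Lambda",
)

# A deployment group that shifts 10% of traffic, waits 5 minutes, then completes.
codedeploy.create_deployment_group(
    applicationName="orders-api",
    deploymentGroupName="orders-api-canary",
    serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployServiceRoleForLambda",
    deploymentConfigName="CodeDeployDefault.LambdaCanary10Percent5Minutes",
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
)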
Four large banks in the country have collaborated to create a secure, simple-to-use mobile payment app that enables users to easily transfer money and pay bills. With the new mobile payment app, anyone can pay another person, split the bill with friends, or pay for their coffee in an instant with just a few taps in the app. The payment app is available on both Android and iOS devices, and includes a web portal that is deployed in AWS using OpsWorks Stacks and EC2 instances. It was a big success with over 5 million users nationwide and handles over 1,000 transactions every hour. After one year, a new feature that will enable users to store their credit card information in the app is ready to be added to the existing web portal. However, due to PCI-DSS compliance, the new version of the APIs and web portal cannot be deployed to the existing application stack.
How would the solutions architect deploy the new web portal for the mobile app without having any impact on 5 million users?
Create a new stack that contains the latest version of the web portal. Using Route 53 service, direct all the incoming traffic to the new stack at once so that all the customers get to access new features.
Deploy the new web portal using a Blue/Green deployment strategy with AWS CodeDeploy and Lambda, in which the green environment represents the current web portal version serving production traffic while the blue environment is staged running a different version of the web portal.
Deploy a new OpsWorks stack that contains a new layer with the latest web portal version. Using Route 53, shift traffic between the existing stack and the new stack, which run different versions of the web portal, following a Blue/Green deployment strategy: route only a small portion of incoming production traffic to the new application stack while maintaining the old application stack. Verify the features of the new portal; once it is 100% validated, slowly increase the incoming production traffic to the new stack. If there are issues on the new stack, change Route 53 to revert to the old stack.
Forcibly upgrade the existing application stack in Production to be PCI-DSS compliant. Once done, deploy the new version of the web portal on the existing application stack.
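A minimal boto3 sketch of the Route 53 weighted-routing piece used to shift a small portion of traffic to a new stack; the hosted zone ID, record name, and load balancer DNS names are assumptions.

import boto3

route53 = boto3.client("route53")

def set_weights(old_weight, new_weight):
    # Weighted records send a slice of production traffic to the new OpsWorks
    # stack while the old stack keeps serving the rest.
    route53.change_resource_record_sets(
        HostedZoneId="Z1234567890ABC",
        ChangeBatch={"Changes": [
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "portal.example.com", "Type": "CNAME", "TTL": 60,
                "SetIdentifier": "old-stack", "Weight": old_weight,
                "ResourceRecords": [{"Value": "old-stack-elb.us-east-1.elb.amazonaws.com"}]}},
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "portal.example.com", "Type": "CNAME", "TTL": 60,
                "SetIdentifier": "new-stack", "Weight": new_weight,
                "ResourceRecords": [{"Value": "new-stack-elb.us-east-1.elb.amazonaws.com"}]}},
        ]},
    )

set_weights(95, 5)    # start with a small canary slice
# ...validate the new portal, then gradually call set_weights(50, 50), (0, 100)...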
A government agency has multiple VPCs in various AWS regions across the United States that need to be linked to an on-premises central office network in Washington, D.C. The central office requires inter-region VPC access over a private network that is dedicated to each region for enhanced security and more predictable data transfer performance. Your team is tasked with quickly building this network mesh and minimizing the management overhead of maintaining these connections.
Which of the following options is the most secure, highly available, and durable solution that you should use to set up this kind of interconnectivity?
Implement a hub-and-spoke network topology in each region that routes all traffic through a network transit center using AWS Transit Gateway. Route traffic between VPCs and the on-premises network over AWS Site-to-Site VPN.
Create a link aggregation group (LAG) in the central office network to aggregate multiple connections at a single AWS Direct Connect endpoint in order to treat them as a single, managed connection. Use AWS Direct Connect Gateway to achieve inter-region VPC access to all of your AWS resources. Create a virtual private gateway in each VPC and then create a public virtual interface for each AWS Direct Connect connection to the Direct Connect Gateway.
Utilize AWS Direct Connect Gateway for inter-region VPC access. Create a virtual private gateway in each VPC, then create a private virtual interface for each AWS Direct Connect connection to the Direct Connect gateway.
Enable inter-region VPC peering which allows peering relationships to be established between VPCs across different AWS regions. This will ensure that the traffic will always stay on the global AWS backbone and will never traverse the public Internet.
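A minimal boto3 sketch of the Direct Connect gateway approach; the connection ID, ASNs, VLAN, and VPC ID are placeholder assumptions, and the virtual-private-gateway association step would be repeated for each regional VPC.

import boto3

dx = boto3.client("directconnect")
ec2 = boto3.client("ec2")

# One Direct Connect gateway can front VPCs in multiple regions.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="central-office-dxgw",
    amazonSideAsn=64512,
)
dxgw_id = dxgw["directConnectGateway"]["directConnectGatewayId"]

# Attach a private virtual interface on the dedicated connection to the gateway.
dx.create_private_virtual_interface(
    connectionId="dxcon-ffffffff",            # assumed Direct Connect connection
    newPrivateVirtualInterface={
        "virtualInterfaceName": "central-office-vif",
        "vlan": 101,
        "asn": 65000,
        "directConnectGatewayId": dxgw_id,
    },
)

# In each VPC/region: create and attach a virtual private gateway, then associate it.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpcId="vpc-0a1b2c3d", VpnGatewayId=vgw)
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw_id,
    virtualGatewayId=vgw,
)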
A company has a hybrid setup for its mobile application. The on-premises data center hosts a 3TB MySQL database server that handles the write-intensive requests from the application. The on-premises network is connected to the AWS VPC with a VPN. On AWS, the serverless application runs on AWS Lambda and API Gateway with an Amazon DynamoDB table used for saving user preferences. The application scales well as more users are using the mobile app. The user traffic is unpredictable but there is an average increase of about 20% each month. A few months into operation, the company noticed an exponential increase in AWS Lambda costs. The Solutions Architect noticed that the Lambda execution time averages 4.5 minutes and most of that is wait time due to latency when calling the on-premises MySQL server.
Which of the following solutions should the Solutions Architect implement to reduce the overall cost?
- Migrate the on-premises MySQL database server to Amazon RDS for MySQL. Enable Multi-AZ to ensure high availability.
- Create a CloudFront distribution with the API Gateway as the origin to cache the API responses and reduce the Lambda invocations.
- Gradually lower the timeout and memory properties of the Lambda functions without increasing the execution time.
- Configure Auto Scaling on Amazon DynamoDB to automatically adjust the capacity with user traffic and enable DynamoDB Accelerator to cache frequently accessed records.
- Provision an AWS Direct Connect connection from the on-premises data center to Amazon VPC instead of a VPN to significantly reduce the network latency to the MySQL server.
- Create a CloudFront distribution with the API Gateway as the origin to cache the API responses and reduce the Lambda invocations.
- Convert the Lambda functions to run them on Amazon EC2 Reserved Instances. Use Auto Scaling on peak time with a combination of Spot instances to further reduce costs.
- Configure Auto Scaling on Amazon DynamoDB to automatically adjust the capacity with user traffic.
- Provision an AWS Direct Connect connection from the on-premises data center to Amazon VPC instead of a VPN to significantly reduce the network latency to the MySQL server.
- Configure caching on the mobile application to reduce the overall AWS Lambda function calls.
- Gradually lower the timeout and memory properties of the Lambda functions without increasing the execution time.
- Add an Amazon ElastiCache cluster in front of DynamoDB to cache the frequently accessed records.
- Migrate the on-premises MySQL database server to Amazon RDS for MySQL. Enable Multi-AZ to ensure high availability.
- Configure API caching on Amazon API Gateway to reduce the overall number of invocations to the Lambda functions.
- Gradually lower the timeout and memory properties of the Lambda functions without increasing the execution time.
- Configure Auto Scaling on Amazon DynamoDB to automatically adjust the capacity based on user traffic.
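As one concrete piece of the options above, here is a minimal boto3 sketch of DynamoDB auto scaling via an Application Auto Scaling target-tracking policy; the table name, capacity bounds, and target utilization are assumptions.

import boto3

aas = boto3.client("application-autoscaling")

# Let DynamoDB read capacity follow the unpredictable user traffic.
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/UserPreferences",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

aas.put_scaling_policy(
    PolicyName="user-preferences-read-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/UserPreferences",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)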
An international foreign exchange company has a serverless forex trading application that was built using AWS SAM and is hosted on the AWS Serverless Application Repository. They have millions of users worldwide who use their online portal 24/7 to trade currencies. Lately, however, they have been receiving a lot of complaints that it takes a few minutes for users to log in to the portal, along with occasional HTTP 504 errors. As the Solutions Architect, you are tasked with optimizing the system and significantly reducing the login time to improve customer satisfaction.
Which of the following should you implement in order to improve the performance of the application with minimal cost? (Select TWO.)
Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses.
Set up multiple, geographically dispersed VPCs in various AWS regions, then create a transit VPC to connect all of your resources. Deploy the Lambda function in each region using AWS SAM in order to handle the requests faster.
Use Lambda@Edge to allow your Lambda functions to customize content that CloudFront delivers and to execute the authentication process in AWS locations closer to the users.
Increase the cache hit ratio of your CloudFront distribution by configuring your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age.
Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user.
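For illustration, a minimal Lambda@Edge viewer-request handler (deployed in us-east-1 and associated with the CloudFront distribution) that performs a lightweight authentication check close to the user; the header name and responses are assumptions.

def handler(event, context):
    # CloudFront passes the viewer request in the Lambda@Edge event record.
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    if "x-session-token" not in headers:
        # Short-circuit unauthenticated requests at the edge instead of the origin.
        return {
            "status": "401",
            "statusDescription": "Unauthorized",
            "body": "Please log in first.",
        }

    # Token present: let CloudFront continue to the cache/origin.
    return request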