Questions 100-167 Flashcards
Q167. A Solutions Architect must build a highly available infrastructure for a popular global video game that runs on a mobile phone platform. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones.
The database tier is an Amazon RDS MySQL Multi-AZ instance. The entire application stack is deployed in both us-east-1 and eu-central-1. Amazon Route 53 is used to route traffic to the two installations using a latency-based routing policy. A weighted routing policy is configured in Route 53 as a failover to the other Region in case the installation in one Region becomes unresponsive.
During the testing of disaster scenarios, after blocking access to the Amazon RDS MySQL instance in eu-central-1 from all the application instances running in that Region, Route 53 did not automatically fail over all traffic to us-east-1. Based on this situation, which changes would allow the infrastructure to fail over to us-east-1? (Select TWO.)
A- Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 60 for the record pointing to the primary Application Load Balancer in eu-central-1.
B- Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 0 for the record pointing to the primary Application Load Balancer in eu-central-1.
C- Set the value of Evaluate Target Health to Yes on the latency alias resources for both eu-central-1 and us-east-1.
D- Write a URL in the application that performs a health check on the database layer. Add it as a health check within the weighted routing policy in both Regions.
E- Disable any existing health checks for the resources in the policies, set a weight of 0 for the records pointing to the primary in both eu-central-1 and us-east-1, and set a weight of 100 for the primary Application Load Balancer only in the Region that has healthy resources.
C- Set the value of Evaluate Target Health to Yes on the latency alias resources for both eu-central-1 and us-east-1.
D- Write a URL in the application that performs a health check on the database layer. Add it as a health check within the weighted routing policy in both Regions.
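Option C's Evaluate Target Health change can be expressed as a Route 53 change batch. The sketch below builds that batch as a plain dict (no API call is made); the record name, ALB DNS names, and hosted zone IDs are hypothetical placeholders, not values from the question.

```python
# Sketch of the ChangeResourceRecordSets change batch behind option C:
# enabling Evaluate Target Health on both latency alias records.

def latency_alias_change(region, alb_dns, alb_zone_id):
    """Build one UPSERT for a latency alias record with target health evaluated."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "game.example.com",
            "Type": "A",
            "SetIdentifier": region,
            "Region": region,
            "AliasTarget": {
                "DNSName": alb_dns,
                "HostedZoneId": alb_zone_id,
                # Without this flag, Route 53 keeps answering with the record
                # for an unhealthy Region, which is why failover never happened.
                "EvaluateTargetHealth": True,
            },
        },
    }

change_batch = {
    "Changes": [
        latency_alias_change("us-east-1", "alb-use1.example.elb.amazonaws.com", "Z0000000000001"),
        latency_alias_change("eu-central-1", "alb-euc1.example.elb.amazonaws.com", "Z0000000000002"),
    ]
}
```

In a real deployment this dict would be passed as `ChangeBatch` to a `change_resource_record_sets` call against the hosted zone.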
Q165
The CISO of a large enterprise with multiple IT departments, each with its own AWS account, wants one central place where AWS permissions for users can be managed and users' authentication credentials can be synchronized with the company's existing on-premises solution. Which solution will meet the CISO's requirements?
A- Define AWS IAM roles based on the functional responsibilities of the users in a central account. Create a SAML-based identity management provider. Map users in the on-premises groups to IAM roles. Establish a trust relationship between the other accounts and the central account.
B- Deploy a common set of AWS IAM users, groups, roles, and policies in all the AWS accounts using AWS Organizations. Implement federation between the on-premises identity provider and the AWS accounts.
C- Use AWS Organizations in a centralized account to define service control policies (SCPs). Create a SAML-based identity management provider in each account and map users in the on-premises groups to AWS IAM roles.
D- Perform a thorough analysis of the user base and create AWS IAM user accounts that have the necessary permissions. Set up a process to provision and deprovision accounts based on data in the on-premises solution.
A- Define AWS IAM roles based on the functional responsibilities of the users in a central account. Create a SAML-based identity management provider. Map users in the on-premises groups to IAM roles. Establish a trust relationship between the other accounts and the central account.
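The trust relationship in answer A lives in each workload account's role. A minimal sketch of such a trust policy, built as a Python dict; the account ID is a hypothetical placeholder:

```python
import json

IDENTITY_ACCOUNT_ID = "111111111111"  # hypothetical central identity account

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Trust the central identity account rather than individual users,
            # so permissions stay managed in one place.
            "Principal": {"AWS": f"arn:aws:iam::{IDENTITY_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

trust_policy_json = json.dumps(trust_policy, indent=2)
```

Federated users in the central account then assume these roles cross-account via STS.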
Q164
What combination of steps could a Solutions Architect take to protect a web workload running on Amazon EC2 from DDoS application-layer attacks? (Select TWO.)
A- Put the EC2 instances behind a Network Load Balancer and configure AWS WAF on it.
B- Migrate the DNS to Amazon Route 53 and use AWS Shield.
C- Put the EC2 instances in an Auto Scaling group and configure AWS WAF on it.
D- Create and use an Amazon CloudFront distribution and configure AWS WAF on it.
E- Create and use an internet gateway in the VPC and use AWS Shield.
B- Migrate the DNS to Amazon Route 53 and use AWS Shield.
D- Create and use an Amazon CloudFront distribution and configure AWS WAF on it.
Q163
A Solutions Architect is designing a highly available and reliable solution for a cluster of Amazon EC2 instances.
The Solutions Architect must ensure that any EC2 instance within the cluster recovers automatically after a system failure. The solution must ensure that the recovered instance maintains the same IP address.
How can these requirements be met?
A- Create an AWS Lambda script to restart any EC2 instances that shut down unexpectedly.
B- Create an Auto Scaling group for each EC2 instance that has a minimum and maximum size of 1.
C- Create a new t2.micro instance to monitor the cluster instances. Configure the t2.micro instance to issue an aws ec2 reboot-instances command upon failure.
D- Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric, and then configure an EC2 action to recover the instance.
D- Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric, and then configure an EC2 action to recover the instance.
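The alarm in answer D can be sketched as the keyword arguments a boto3 `put_metric_alarm` call would take. No API call is made here, and the instance ID, Region, and thresholds are hypothetical:

```python
# CloudWatch alarm parameters for answer D: on a system status check failure,
# the EC2 recover action moves the instance to healthy hardware while keeping
# its instance ID and private IP, which satisfies the same-IP requirement.

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical cluster member
REGION = "us-east-1"

alarm_params = {
    "AlarmName": f"recover-{INSTANCE_ID}",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_System",
    "Dimensions": [{"Name": "InstanceId", "Value": INSTANCE_ID}],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 2,
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    # Built-in recover action; no Lambda or monitoring instance needed.
    "AlarmActions": [f"arn:aws:automate:{REGION}:ec2:recover"],
}
```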
Q162
A company is currently using AWS CodeCommit for its source control and AWS CodePipeline for continuous integration. The pipeline has a build stage for building the artifacts, which are then staged in an Amazon S3 bucket.
The company has identified various improvement opportunities in the existing process, and the Solutions Architect has been given the following requirements:
Create a new pipeline to support feature development without impacting production applications.
Incorporate continuous testing with unit tests.
Isolate development and production artifacts.
Support the capability to merge tested code into production code.
How should the Solutions Architect achieve these requirements?
A- Trigger a separate pipeline from CodeCommit feature branches. Use AWS CodeBuild for running unit tests. Use CodeBuild to stage the artifacts within an S3 bucket in a separate testing account.
B- Trigger a separate pipeline from CodeCommit feature branches. Use AWS Lambda for running unit tests. Use AWS CodeDeploy to stage the artifacts within an S3 bucket in a separate testing account.
C- Trigger a separate pipeline from CodeCommit tags. Use Jenkins for running unit tests. Create a stage in the pipeline with S3 as the target for staging the artifacts within an S3 bucket in a separate testing account.
D- Create a separate CodeCommit repository for development and use it to trigger the pipeline. Use AWS Lambda for running unit tests. Use AWS CodeBuild to stage the artifacts within different S3 buckets in the same production account.
A- Trigger a separate pipeline from CodeCommit feature branches. Use AWS CodeBuild for running unit tests. Use CodeBuild to stage the artifacts within an S3 bucket in a separate testing account.
Q161
A company has an internal AWS Elastic Beanstalk worker environment inside a VPC that must access an external payment gateway API available on an HTTPS endpoint on the public internet. Because of security policies, the payment gateway's application team can grant access to only one public IP address. Which architecture will set up an Elastic Beanstalk environment to access the company's application without multiple changes on the company's end?
A- Configure the Elastic Beanstalk application to place Amazon EC2 instances in a private subnet with an outbound route to a NAT gateway in a public subnet. Associate an Elastic IP address with the NAT gateway so it can be whitelisted on the payment gateway application side.
B- Configure the Elastic Beanstalk application to place Amazon EC2 instances in a public subnet with an internet gateway. Associate an Elastic IP address with the internet gateway so it can be whitelisted on the payment gateway application side.
C- Configure the Elastic Beanstalk application to place Amazon EC2 instances in a private subnet. Set an HTTPS_PROXY application parameter to send outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address with the EC2 proxy host so it can be whitelisted on the payment gateway application side.
D- Configure the Elastic Beanstalk application to place Amazon EC2 instances in a public subnet. Set the HTTPS_PROXY and NO_PROXY application parameters to send non-VPC outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address with the EC2 proxy host so it can be whitelisted on the payment gateway application side.
A- Configure the Elastic Beanstalk application to place Amazon EC2 instances in a private subnet with an outbound route to a NAT gateway in a public subnet. Associate an Elastic IP address with the NAT gateway so it can be whitelisted on the payment gateway application side. (The NAT gateway presents a single static public IP without requiring application changes.)
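Options C and D rely on the HTTPS_PROXY convention: the variable is set as an application parameter and standard HTTP clients pick it up from the environment. A minimal illustration; the proxy address is a documentation-range placeholder, not a real host:

```python
import os
import urllib.request

# Hypothetical EC2 proxy host in the public subnet, reachable through the
# whitelisted Elastic IP (203.0.113.10 is a TEST-NET placeholder address).
os.environ["HTTPS_PROXY"] = "http://203.0.113.10:3128"

# Stock clients (urllib, requests, many SDKs) read the *_proxy environment
# variables, so outbound HTTPS is redirected without further code changes.
proxies = urllib.request.getproxies()
```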
Q160
A company must deploy multiple independent instances of an application. The front-end application is internet accessible. However, corporate policy stipulates that the backends are to be isolated from each other and from the internet, yet accessible from a centralized administration server. The application setup should be automated to minimize the opportunity for mistakes as new instances are deployed.
Which option meets the requirements and minimizes costs?
A- Use an AWS CloudFormation template to create identical IAM roles for each Region. Use AWS CloudFormation StackSets to deploy each application instance by using parameters to customize it, and use security groups to isolate each instance while permitting access to the central server.
B- Create each instance of the application's IAM roles and resources in separate accounts by using AWS CloudFormation StackSets. Include a VPN connection to the VPN gateway of the central administration server.
C- Duplicate the application's IAM roles and resources in separate accounts by using a single AWS CloudFormation template. Include VPC peering to connect the VPC of each application instance to a central VPC.
D- Use the parameters of the AWS CloudFormation templates to customize the deployment into separate accounts. Include a NAT gateway to allow communication back to the central administration server.
A- Use an AWS CloudFormation template to create identical IAM roles for each Region. Use AWS CloudFormation StackSets to deploy each application instance by using parameters to customize it, and use security groups to isolate each instance while permitting access to the central server. (Security groups provide the isolation at no extra cost; per-instance VPNs, peering, or NAT gateways add expense.)
Q159
A company has deployed an application to multiple environments in AWS, including production and testing. The company has separate accounts for production and testing, and users are allowed to create additional application users for team members or services as needed. The Security team has asked the Operations team for better isolation between production and testing, with centralized controls on security credentials and improved management of permissions between environments.
Which of the following options would MOST securely accomplish this goal?
A- Create a new AWS account to hold user and service accounts, such as an identity account. Create users and groups in the identity account. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.
B- Modify permissions in the production and testing accounts to limit creating new IAM users to members of the Operations team. Set a strong IAM password policy on each account. Create new IAM users and groups in each account to limit developer access to just the services required to complete their job functions.
C- Create a script that runs on each account and checks the user accounts for adherence to a security policy. Disable any user or service accounts that do not comply.
D- Create all user accounts in the production account. Create roles for access in the production and testing accounts. Grant cross-account access from the production account to the testing accounts.
A- Create a new AWS account to hold user and service accounts, such as an identity account. Create users and groups in the identity account. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.
Q158.
A company has an Amazon EC2 deployment that has the following architecture:
An application tier that contains 8 m4.xlarge instances
A Classic Load Balancer
Amazon S3 as a persistent data store
After one of the EC2 instances fails, users report very slow processing of their requests. A Solutions Architect must recommend design changes to maximize system reliability. The solution must minimize costs.
What should the Solutions Architect recommend?
A- Migrate the existing EC2 instances to a serverless deployment using AWS Lambda functions.
B- Change the Classic Load Balancer to an Application Load Balancer.
C- Replace the application tier with m4.large instances in an Auto Scaling group.
D- Replace the application tier with 4 m4.2xlarge instances.
C- Replace the application tier with m4.large instances in an Auto Scaling group.
Q157. A company has a data center that must be migrated to AWS as quickly as possible. The data center has a 500 Mbps AWS Direct Connect link and a separate, fully available 1 Gbps ISP connection. A Solutions Architect must transfer 20 TB of data from the data center to an Amazon S3 bucket.
What is the fastest way to transfer the data?
A- Upload the data to the S3 bucket using the existing DX link.
B- Send the data to AWS using the AWS Import/Export service.
C- Upload the data using an 80 TB AWS Snowball device.
D- Upload the data to the S3 bucket using S3 Transfer Acceleration.
A- Upload the data to the S3 bucket using the existing DX link.
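A back-of-the-envelope calculation shows why the dedicated link wins here. The link efficiency and the Snowball turnaround time are assumed ballpark figures, not AWS-published numbers:

```python
# Rough transfer times for the 20 TB dataset over each path.

def transfer_days(data_tb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Days to move `data_tb` terabytes over a link used at `efficiency` of capacity."""
    bits = data_tb * 1e12 * 8
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86400

dx_days = transfer_days(20, 500)    # dedicated 500 Mbps Direct Connect link
isp_days = transfer_days(20, 1000)  # 1 Gbps ISP line (shared, public path)

SNOWBALL_TURNAROUND_DAYS = 7  # assumed: shipping both ways plus S3 import
```

At 500 Mbps the DX transfer finishes in under five days, comfortably beating a Snowball round trip; the ISP line would be faster in raw terms but traverses the public internet.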
Q156. A company is running an email application across multiple AWS Regions. The company uses Ohio (us-east-2) as the primary Region and Northern Virginia (us-east-1) as the Disaster Recovery (DR) Region. The data is continuously replicated from the primary Region to the DR Region by a single instance on the public subnet in both Regions. The replication messages between the Regions have a significant backlog during certain times of the day. The backlog clears on its own after a short time, but it affects the application's RPO.
Which of the following solutions should help remediate this performance problem? (Select TWO.)
A- Increase the size of the instances
B- Have the instance in the primary Region write the data to an Amazon SQS queue in the primary Region instead, and have the instance in the DR Region poll from this queue.
C- Use multiple instances in the primary and DR Regions to send and receive the replication data.
D- Change the DR Region to Oregon (us-west-2) instead of the current DR Region.
B- Have the instance in the primary Region write the data to an Amazon SQS queue in the primary Region instead, and have the instance in the DR Region poll from this queue.
C- Use multiple instances in the primary and DR Regions to send and receive the replication data.
Q155. A hybrid network architecture must be used during a company's multi-year data center migration from multiple private data centers to AWS. The current data centers are linked together with private fiber. Due to unique legacy applications, NAT cannot be used. During the migration period, many applications will need access to other applications in both the data centers and AWS.
Which option offers a hybrid network architecture that is secure and highly available, allows for high bandwidth, and supports a multi-region deployment post-migration?
A- Use AWS Direct Connect to each data center from different ISPs, and configure routing to fail over to the other data center's Direct Connect if one fails. Ensure that no VPC CIDR blocks overlap with one another or with the on-premises network.
B- Use multiple hardware VPN connections to AWS from the on-premises data center. Route different subnet traffic through different VPN connections. Ensure that no VPC CIDR blocks overlap one another or the on-premises network.
C- Use AWS Direct Connect and a VPN as a backup, and configure both to use the same virtual private gateway and BGP. Ensure that no VPC CIDR blocks overlap one another or the on-premises network.
A- Use AWS Direct Connect to each data center from different ISPs, and configure routing to fail over to the other data center's Direct Connect if one fails. Ensure that no VPC CIDR blocks overlap with one another or with the on-premises network.
Q154. An online e-commerce business is running a workload on AWS. The application architecture includes a web tier, an application tier for business logic, and a database tier for user and transactional data management. The database server has a 100 GB memory requirement. The business requires cost-efficient disaster recovery for the application, with an RTO of 5 minutes and an RPO of 1 hour. The business also has a regulatory requirement for out-of-region disaster recovery, with a minimum distance between the primary and alternate sites of 250 miles.
Which of the following options can the Solutions Architect design to create a comprehensive solution for this customer that meets the DR requirements?
A- Back up the application and database data frequently and copy them to Amazon S3. Replicate the backups using S3 cross-region replication, and use AWS CloudFormation to instantiate infrastructure for disaster recovery and restore data from Amazon S3.
B- Employ a pilot light environment in which the primary database is configured with mirroring to build a standby database on m4.large in the alternate region. Use AWS CloudFormation to instantiate the web servers, application servers, and load balancers in case of a disaster to bring the application up in the alternate region. Vertically resize the database to meet the full production demands, and use Amazon Route 53 to switch traffic to the alternate region.
C- Use a scaled-down version of the fully functional production environment in the alternate region that includes one instance of the web server, one instance of the application server, and a replicated instance of the database server in standby mode. Place the web and application tiers in an Auto Scaling group behind a load balancer, which can automatically scale when load arrives at the application. Use Amazon Route 53 to switch traffic to the alternate region.
C- Use a scaled-down version of the fully functional production environment in the alternate region that includes one instance of the web server, one instance of the application server, and a replicated instance of the database server in standby mode. Place the web and application tiers in an Auto Scaling group behind a load balancer, which can automatically scale when load arrives at the application. Use Amazon Route 53 to switch traffic to the alternate region.
Q153. An e-commerce company is revamping its IT infrastructure and is planning to use AWS services. The company's CIO has asked a Solutions Architect to design a simple, highly available, and loosely coupled order processing application. The application is responsible for receiving and processing orders before storing them in an Amazon DynamoDB table. The application has a sporadic traffic pattern and should be able to scale during marketing campaigns to process the orders with minimal delays.
Which of the following is the MOST reliable approach to meet the requirements?
A- Receive the orders in an Amazon EC2-hosted database and use EC2 instances to process them.
B- Receive the orders in an Amazon SQS queue and trigger an AWS Lambda function to process them.
C- Receive the orders using an AWS Step Functions program and trigger an Amazon ECS container to process them.
D- Receive the orders in Amazon Kinesis Data Streams and use Amazon EC2 instances to process them.
B- Receive the orders in an Amazon SQS queue and trigger an AWS Lambda function to process them.
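A minimal sketch of the Lambda in answer B. The DynamoDB write is represented by a callable parameter so the handler logic stays testable without AWS; the field names are hypothetical:

```python
import json

def handler(event, put_item=lambda item: None):
    """Process an SQS-triggered batch of orders.

    `put_item` stands in for a boto3 DynamoDB Table.put_item call.
    """
    processed = []
    for record in event.get("Records", []):
        order = json.loads(record["body"])  # SQS delivers the message body as a string
        put_item({"orderId": order["orderId"], "total": order["total"]})
        processed.append(order["orderId"])
    return {"processed": processed}
```

SQS absorbs the sporadic bursts while Lambda scales its concurrency with the queue depth, which is what makes this the loosely coupled choice.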
Q152.
Q152. A company's CFO recently analyzed the company's AWS monthly bill and identified an opportunity to reduce the cost of the AWS Elastic Beanstalk environments in use. The CFO has asked a Solutions Architect to design a highly available solution that will spin up an Elastic Beanstalk environment in the morning and terminate it at the end of the day.
The solution should be designed with minimal operational overhead and to minimize costs. It should also be able to handle the increased use of Elastic Beanstalk environments among different teams, and must provide a one-stop scheduler solution for all teams to keep the operational costs low.
What design will meet these requirements?
A. Set up a Linux EC2 micro instance. Configure an IAM role to allow the start and stop of the Elastic Beanstalk environment and attach it to the instance. Create scripts on the instance to start and stop the Elastic Beanstalk environment. Configure cron jobs on the instance to execute the scripts.
B. Develop AWS Lambda functions to start and stop the Elastic Beanstalk environment. Configure a Lambda execution role granting Elastic Beanstalk environment start/stop permissions, and assign the role to the Lambda functions. Configure cron-expression Amazon CloudWatch Events rules to trigger the Lambda functions.
C. Develop an AWS Step Functions state machine with "Wait" as its type to control the start and stop time. Use the activity task to start and stop the Elastic Beanstalk environment. Create a role for Step Functions to allow it to start and stop the Elastic Beanstalk environment. Invoke Step Functions daily.
D. Configure a time-based Auto Scaling group. In the morning, have the Auto Scaling group scale up an Amazon EC2 instance and put the Elastic Beanstalk environment start command in the EC2 instance user data. At the end of the day, scale down the instance number to 0 to terminate the EC2 instance.
B. Develop AWS Lambda functions to start and stop the Elastic Beanstalk environment. Configure a Lambda execution role granting Elastic Beanstalk environment start/stop permissions, and assign the role to the Lambda functions. Configure cron-expression Amazon CloudWatch Events rules to trigger the Lambda functions.
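The schedule half of answer B can be sketched as the parameters for two CloudWatch Events (EventBridge) rules. The times, function name, and the `TargetInput` payload convention are hypothetical; the cron expressions are in UTC:

```python
# Two scheduled rules driving one start/stop Lambda, as put_rule-style
# parameters plus the input that would be passed to the Lambda target.

LAMBDA_ARN = "arn:aws:lambda:us-east-1:111111111111:function:eb-env-scheduler"

def schedule_rule(name: str, cron: str, action: str) -> dict:
    """Parameters for one scheduled rule and its Lambda target payload."""
    return {
        "Name": name,
        "ScheduleExpression": f"cron({cron})",
        "State": "ENABLED",
        # One Lambda handles both events; the payload says start or stop,
        # keeping this a one-stop scheduler for every team's environments.
        "TargetInput": {"action": action},
    }

rules = [
    schedule_rule("eb-start-weekdays", "0 8 ? * MON-FRI *", "start"),
    schedule_rule("eb-stop-weekdays", "0 18 ? * MON-FRI *", "stop"),
]
```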
Q151. A company has an Amazon VPC that is divided into a public subnet and a private subnet. A web application runs in the Amazon VPC, and each subnet has its own NACL. The public subnet has a CIDR of 10.0.0.0/24. An Application Load Balancer is deployed to the public subnet. The private subnet has a CIDR of 10.0.1.0/24. Amazon EC2 instances that run a web server on port 80 are launched into the private subnet. Only network traffic that is required for the Application Load Balancer to access the web application can be allowed to travel between the public and private subnets.
What collection of rules should be written to ensure that the private subnet's NACL meets the requirement? (Select TWO.)
A. An inbound rule for port 80 from source 0.0.0.0/0
B. An inbound rule for port 80 from source 10.0.0.0/24
C. An outbound rule for port 80 to destination 0.0.0.0/0
D. An outbound rule for port 80 to destination 10.0.0.0/24
E. An outbound rule for ports 1024 through 65535 to destination 10.0.0.0/24
B. An inbound rule for port 80 from source 10.0.0.0/24
E. An outbound rule for ports 1024 through 65535 to destination 10.0.0.0/24
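Because NACLs are stateless, the inbound request rule needs a matching outbound rule for the responses, which return to the ALB's ephemeral source ports. A sketch of the two rules as the parameters a boto3 `create_network_acl_entry` call would take (no call is made; rule numbers are arbitrary):

```python
# Private subnet NACL entries: inbound port 80 from the ALB's subnet, plus
# outbound ephemeral ports for the stateless return traffic.

PUBLIC_SUBNET_CIDR = "10.0.0.0/24"  # where the ALB lives

nacl_entries = [
    {   # Inbound: the ALB forwards client requests to the web servers on port 80.
        "RuleNumber": 100,
        "Protocol": "6",  # TCP
        "RuleAction": "allow",
        "Egress": False,
        "CidrBlock": PUBLIC_SUBNET_CIDR,
        "PortRange": {"From": 80, "To": 80},
    },
    {   # Outbound: responses go back to whatever ephemeral port the ALB
        # used as its source, so the full 1024-65535 range is needed.
        "RuleNumber": 100,
        "Protocol": "6",
        "RuleAction": "allow",
        "Egress": True,
        "CidrBlock": PUBLIC_SUBNET_CIDR,
        "PortRange": {"From": 1024, "To": 65535},
    },
]
```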
Q150. A Solutions Architect is redesigning an image viewing and messaging platform to be delivered as SaaS. Currently there is a farm of virtual desktop infrastructure (VDI) that runs a desktop image viewing application and a desktop messaging application. Both applications use a shared database to manage user accounts and sharing. Users log in from a web portal that launches the applications and streams the view of the application on the user's machine. The Development Operations team wants to move away from using VDI and wants to rewrite the application.
What is the MOST cost-effective architecture that offers both security and ease of management?
A. Run a website from an Amazon S3 bucket with a separate S3 bucket for images and messaging data. Call AWS Lambda functions from embedded JavaScript to manage the dynamic content, and use Amazon Cognito for user and sharing management.
B. Run a website from Amazon EC2 Linux servers, storing the images in Amazon S3, and use Amazon Cognito for user accounts and sharing. Create AWS CloudFormation templates to launch the application by using EC2 user data to install and configure the application.
C. Run a website as an AWS Elastic Beanstalk application, storing the images in Amazon S3, and using an Amazon RDS database for user accounts and sharing. Create AWS CloudFormation templates to launch the application and perform blue/green deployments.
D. Run a website from an Amazon S3 bucket that authorizes Amazon AppStream to stream applications for a combined image viewer and messenger that stores images in Amazon S3. Have the website use an Amazon RDS database for user accounts and sharing.
A. Run a website from an Amazon S3 bucket with a separate S3 bucket for images and messaging data. Call AWS Lambda functions from embedded JavaScript to manage the dynamic content, and use Amazon Cognito for user and sharing management.
Q149
A company stores sales transaction data in Amazon DynamoDB tables. To detect anomalous behavior and respond quickly, all changes to the items stored in the DynamoDB tables must be logged within 30 minutes.
Which solution meets the requirements?
A- Copy the DynamoDB tables into Apache Hive tables on Amazon EMR every hour and analyze them for anomalous behavior. Send an Amazon SNS notification when anomalous behavior is detected.
B- Use AWS CloudTrail to capture all the APIs that change the DynamoDB tables. Send an SNS notification when anomalous behavior is detected using CloudTrail event filtering.
C- Use Amazon DynamoDB Streams to capture and send updates to AWS Lambda. Create a Lambda function to output records to Amazon Kinesis Data Streams. Analyze any anomalies with Amazon Kinesis Data Analytics. Send an SNS notification when anomalies are detected.
C- Use Amazon DynamoDB Streams to capture and send updates to AWS Lambda. Create a Lambda function to output records to Amazon Kinesis Data Streams. Analyze any anomalies with Amazon Kinesis Data Analytics. Send an SNS notification when anomalies are detected.
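The middle hop of answer C can be sketched as a pure transformation: the Lambda reshapes DynamoDB Streams records into the entries a Kinesis `put_records` call expects. The actual boto3 call is left out so the logic is testable; the partition-key choice is an illustrative assumption:

```python
import json

def to_kinesis_records(stream_event: dict) -> list:
    """Convert a DynamoDB Streams batch into Kinesis put_records entries."""
    entries = []
    for record in stream_event.get("Records", []):
        change = {
            "eventName": record["eventName"],  # INSERT / MODIFY / REMOVE
            "keys": record["dynamodb"]["Keys"],
            "approxTime": record["dynamodb"].get("ApproximateCreationDateTime"),
        }
        entries.append({
            "Data": json.dumps(change).encode(),
            # Partition by item key so changes to one item stay ordered.
            "PartitionKey": json.dumps(change["keys"], sort_keys=True),
        })
    return entries
```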
Q148. To abide by industry regulations, a Solutions Architect must design a solution that will store a company's critical data in multiple public AWS Regions, including in the United States, where the company's headquarters is located. The Solutions Architect is required to provide access to the data stored in AWS to the company's global WAN network. The Security team mandates that no traffic accessing this data should traverse the public internet.
How should the Solutions Architect design a highly available solution that meets the requirements and is cost-effective?
A. Establish AWS Direct Connect connections from the company headquarters to all AWS Regions in use. Use the company WAN to send traffic over to the headquarters and then to the respective DX connection to access the data.
B. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use inter-region VPC peering to access the data in other AWS Regions.
C. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use an AWS Transit VPC solution to access data in other AWS Regions.
D. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use Direct Connect Gateway to access data in other AWS Regions.
D. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use Direct Connect Gateway to access data in other AWS Regions.
Q147. A company wants to host its website on AWS using serverless architecture design patterns for global customers.
The company has its requirements as follows:
- The website should be responsive
- The website should offer minimal latency
- The website should be highly available
- Users should be able to authenticate through social identity providers such as Google, Facebook, and Amazon
- There should be baseline DDoS protections for spikes in traffic
How can the design requirements be met?
A. Use Amazon CloudFront with Amazon ECS for hosting the website. Use AWS Secrets Manager to provide user management and authentication functions. Use ECS Docker containers to build an API.
B. Use Amazon Route 53 latency routing with an Application Load Balancer and AWS Fargate in different Regions for hosting the website. Use Amazon Cognito to provide user management and authentication functions. Use Amazon EKS containers to build an API.
C. Use Amazon CloudFront with Amazon S3 for hosting static web resources. Use Amazon Cognito to provide user management and authentication functions. Use Amazon API Gateway with AWS Lambda to build an API.
D. Use AWS Direct Connect with Amazon CloudFront and Amazon S3 for hosting static web resources. Use Amazon Cognito to provide user management and authentication functions. Use AWS Lambda to build an API.
C. Use Amazon CloudFront with Amazon S3 for hosting static web resources. Use Amazon Cognito to provide user management and authentication functions. Use Amazon API Gateway with AWS Lambda to build an API
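The API tier in answer C can be sketched as a minimal Lambda function behind API Gateway with a Cognito user pool authorizer. This is an illustrative handler, not from the question; the claim field and names are assumptions. With a Cognito authorizer, the verified token claims arrive under `requestContext.authorizer.claims`:

```python
import json

def handler(event, context):
    # Minimal API Gateway Lambda proxy handler (hypothetical example).
    # A Cognito user pool authorizer has already validated the token
    # before Lambda runs; its claims are passed in the request context.
    claims = (event.get("requestContext", {})
                   .get("authorizer", {})
                   .get("claims", {}))
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"user": claims.get("email", "anonymous")}),
    }
```

CloudFront in front of S3 and API Gateway also provides the baseline DDoS protection (AWS Shield Standard) the question asks for.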
Q146. A company has multiple AWS accounts hosting IT applications. An Amazon CloudWatch Logs agent is installed on all Amazon EC2 instances. The company wants to aggregate all security events in a centralized AWS account dedicated to log storage. Security Administrators need to perform near-real-time gathering and correlation of events across multiple AWS accounts. Which solution satisfies these requirements?
A. Create a Log Audit IAM role in each application AWS account with permissions to view CloudWatch Logs, configure an AWS Lambda function to assume the Log Audit role, and perform an hourly export of CloudWatch Logs data to an Amazon S3 bucket in the logging AWS account.
B. Configure CloudWatch Logs streams in each application AWS account to forward events to
CloudWatch Logs in the logging AWS account. In the logging AWS account, subscribe an Amazon
Kinesis Data Firehose stream to Amazon CloudWatch Events and use the stream to persist log data in
Amazon S3
C. Create Amazon Kinesis Data Streams in the logging account, subscribe the stream to CloudWatch Logs streams in each application AWS account, configure an Amazon Kinesis Data Firehose delivery stream with the Data Streams as its source and persist the log data in an Amazon S3 bucket inside the
logging AWS account
D. Configure CloudWatch Logs agents to publish data to an Amazon Kinesis Data Firehose stream in the
logging AWS account, use an AWS Lambda function to read messages from the stream and push
messages to Data Firehose, and persist the data in Amazon S3
C. Create Amazon Kinesis Data Streams in the logging account, subscribe the stream to CloudWatch Logs streams in each application AWS account, configure an Amazon Kinesis Data Firehose delivery stream with the Data Streams as its source and persist the log data in an Amazon S3 bucket inside the
logging AWS account
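The per-account wiring in answer C is a CloudWatch Logs subscription filter pointing at the central Kinesis Data Stream. A minimal sketch, assuming placeholder ARNs and a `logs.put_subscription_filter()` call in each application account:

```python
def subscription_filter_params(log_group, stream_arn, role_arn):
    # Keyword arguments for logs.put_subscription_filter() in each
    # application account. An empty filterPattern matches every log
    # event, so all security events flow to the Kinesis Data Stream
    # in the logging account in near-real time.
    return {
        "logGroupName": log_group,
        "filterName": "ToCentralLogging",
        "filterPattern": "",          # empty pattern = forward everything
        "destinationArn": stream_arn, # stream in the logging account
        "roleArn": role_arn,          # role CloudWatch Logs assumes to write
    }

# Example with placeholder ARNs:
params = subscription_filter_params(
    "/app/security",
    "arn:aws:kinesis:us-east-1:999999999999:stream/central-logs",
    "arn:aws:iam::111111111111:role/CWLtoKinesisRole",
)
# boto3.client("logs").put_subscription_filter(**params)  # actual call
```

A Kinesis Data Firehose delivery stream in the logging account then reads the Data Stream and persists events to S3.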
Q145. A company has a web application that securely uploads pictures and videos to an Amazon S3 bucket. The company requires that only authenticated users are allowed to post content. The application generates a pre-signed URL that is used to upload objects through a browser interface. Most users are reporting slow upload times for objects larger than 100 MB. What can a Solutions Architect do to improve the performance of these uploads while ensuring only authenticated users are allowed to post content?
A. Set up an Amazon API Gateway with an edge-optimized API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using a COGNITO_USER_POOLS authorizer. Have the browser interface use API Gateway instead of the pre-signed URL to upload objects.
B. Set up an Amazon API Gateway with a regional API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using an AWS Lambda authorizer. Have the browser interface use API Gateway instead of the pre-signed URL to upload objects.
C. Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when generating the pre-signed URL. Have the browser interface upload the objects to this URL using the S3 multipart upload API.
D. Configure an Amazon CloudFront distribution for the destination S3 bucket. Enable PUT and POST methods for the CloudFront cache behavior. Update the CloudFront origin to use an origin access identity (OAI). Give the OAI s3:PutObject permissions in the bucket policy. Have the browser interface upload objects using the CloudFront distribution.
C. Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when generating the pre-signed URL. Have the browser interface upload the objects to this URL using the S3 multipart upload API.
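Transfer Acceleration works by giving the bucket a dedicated edge-routed hostname, which the pre-signed URL must use. A small sketch (bucket name and key are examples; the boto3 calls are shown as comments):

```python
def accelerate_endpoint(bucket):
    # S3 Transfer Acceleration uses a bucket-specific hostname that
    # routes uploads through the nearest CloudFront edge location,
    # which is what speeds up the large cross-continent uploads.
    return f"https://{bucket}.s3-accelerate.amazonaws.com"

# With boto3, the same endpoint is selected via client config (sketch):
# from botocore.config import Config
# s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
# url = s3.generate_presigned_url(
#     "put_object",
#     Params={"Bucket": "my-bucket", "Key": "video.mp4"},
#     ExpiresIn=3600)
```

The pre-signed URL still carries the authentication requirement: only the application, acting for an authenticated user, can mint one.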
Q144. A company needs to cost-effectively persist small data records (up to 1 KB) for up to 30 days. The data is read rarely. When reading the data, a 5-minute delay is acceptable. Which of the following solutions achieve this goal? (Select TWO.)
A. Use Amazon S3 to collect multiple records in one S3 object. Use a lifecycle configuration to move data to Amazon Glacier immediately after write. Use expedited retrievals when reading the data.
B. Write the records to AWS Kinesis Data Firehose and configure Kinesis Data Firehose to deliver the
data to Amazon S3 after 5 minutes. Set an expiration action at 30 days on the S3 bucket.
C. Use an AWS Lambda function invoked via Amazon API Gateway to collect data for 5 minutes. Write data to Amazon S3 just before the Lambda execution stops
D. Write the records to Amazon DynamoDB configured with a Time To Live (TTL) of 30 days. Read data using the GetItem or BatchGetItem call.
E. Write the records to an Amazon ElastiCache for Redis cluster. Configure the Redis append-only file (AOF) persistence logs to write to Amazon S3. Recover from the log if the ElastiCache instance fails.
A. Use Amazon S3 to collect multiple records in one S3 object. Use a lifecycle configuration to move data to Amazon Glacier immediately after write. Use expedited retrievals when reading the data.
C. Use an AWS Lambda function invoked via Amazon API Gateway to collect data for 5 minutes. Write
data to Amazon S3 just before the Lambda execution stops
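The batching idea in answers A and C can be sketched as packing many small records into one newline-delimited JSON object per window (key prefix and format are assumptions for illustration):

```python
import json
import time

def batch_object(records, prefix="batch"):
    # Pack many small (<= 1 KB) records into a single newline-delimited
    # JSON body: one S3 object per batch instead of one per record,
    # which cuts per-request costs. The timestamped key lets a 30-day
    # lifecycle/expiration rule clean up old batches.
    key = f"{prefix}/{int(time.time())}.jsonl"
    body = "\n".join(json.dumps(r) for r in records)
    return key, body

key, body = batch_object([{"id": 1}, {"id": 2}, {"id": 3}])
# boto3.client("s3").put_object(Bucket="records-bucket", Key=key, Body=body)
```

The 5-minute acceptable read delay is what makes collecting records for a window before writing viable.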
Q143. A company manages more than 200 separate internet-facing web applications. All of the applications are deployed to AWS in a single AWS Region. The fully qualified domain names (FQDNs) of all of the applications are made available through HTTPS using Application Load Balancers (ALBs). The ALBs are configured to use public SSL/TLS certificates.
A Solutions Architect needs to migrate the web applications to a multi-region architecture. All HTTPS
services should continue to work without interruption
Which approach meets these requirements?
A. Request a certificate for each FQDN using AWS KMS. Associate the certificates with the ALBs in the primary AWS Region. Enable cross-region availability in AWS KMS for the certificates and associate the certificates with the ALBs in the secondary AWS Region.
B. Generate the key pairs and certificate requests for each FQDN using AWS KMS. Associate the
certificates with the ALBs in both the primary and secondary AWS Regions.
C. Request a certificate for each FQDN using AWS Certificate Manager. Associate the certificates with
the ALBs in both the primary and secondary AWS Regions.
D. Request certificates for each FQDN in both the primary and secondary AWS Regions using AWS Certificate Manager. Associate the certificates with the corresponding ALBs in each AWS Region.
D. Request certificates for each FQDN in both the primary and secondary AWS Regions using AWS Certificate Manager. Associate the certificates with the corresponding ALBs in each AWS Region.
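Answer D works because ACM certificates attached to ALBs are regional resources, so the same certificate request must be issued separately in each Region. A hedged sketch (domain and Region names are examples; the boto3 loop is shown as a comment):

```python
def acm_request_params(fqdn):
    # Arguments for acm.request_certificate(). DNS validation lets one
    # CNAME record validate the certificate in every Region, since the
    # validation record is the same for a given domain.
    return {"DomainName": fqdn, "ValidationMethod": "DNS"}

params = acm_request_params("app1.example.com")
# for region in ("us-east-1", "eu-west-1"):   # one request per Region
#     boto3.client("acm", region_name=region).request_certificate(**params)
```

Since ACM certificates cannot be exported or copied across Regions, options A and B (KMS-based) do not apply.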
Q142. A company's data center is connected to the AWS Cloud over a minimally used 10-Gbps AWS Direct Connect connection with a private virtual interface to its virtual private cloud (VPC). The company's internet connection is 200 Mbps, and the company has a 150-TB dataset that is created each Friday. The data must be transferred and available in Amazon S3 on Monday morning.
Which is the LEAST expensive way to meet the requirements while allowing for data transfer growth?
A. Order two 80-TB AWS Snowball appliances. Offload the data to the appliances and ship them to AWS. AWS will copy the data from the Snowball appliances to Amazon S3.
B. Create a VPC endpoint for Amazon S3. Copy the data to Amazon S3 by using the VPC endpoint,
forcing the transfer to use the Direct Connect connection.
C. Create a VPC endpoint for Amazon S3. Set up a reverse proxy farm behind a Classic Load Balancer in
the VPC. Copy the data to Amazon S3 using the proxy.
D. Create a public virtual interface on a Direct Connect connection and copy the data to Amazon S3 over
the connection.
D. Create a public virtual interface on a Direct Connect connection and copy the data to Amazon S3 over
the connection.
Q141. An organization has a write-intensive mobile application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB.
The application has scaled well; however, costs have increased exponentially because of higher than anticipated Lambda costs.
The application's use is unpredictable, but there has been a steady 20% increase in utilization every month.
While monitoring the current Lambda functions, the Solutions Architect notices that the execution-time
averages 4.5 minutes. Most of the wait time is the result of a high-latency network call to a 3-TB MySQL database server that is on-premises.
A VPN is used to connect to the VPC, so the Lambda functions have been configured with a five-minute timeout.
How can the Solutions Architect reduce the cost of the current architecture?
A- Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database.
- Enable local caching in the mobile application to reduce the Lambda function invocation calls.
- Monitor the Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time.
- Offload the frequently accessed records from DynamoDB to Amazon ElastiCache.
B- Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database. Cache the API Gateway results in Amazon CloudFront. Use Amazon EC2 Reserved Instances instead of Lambda. Enable Auto Scaling on EC2 and use Spot Instances during peak times. Enable DynamoDB Auto Scaling to manage target utilization.
C- Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL. Enable caching of the Amazon API Gateway results in Amazon CloudFront to reduce the number of Lambda function invocations. Monitor the Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Enable DynamoDB Accelerator for frequently accessed records, and enable the DynamoDB Auto Scaling feature.
D- Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL. Enable API caching on API Gateway to reduce the number of Lambda function invocations. Continue to monitor the AWS Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Enable Auto Scaling in DynamoDB.
D-
Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL. Enable API caching on API Gateway to reduce the number of Lambda function invocations.
Continue to monitor the AWS Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time.
Enable Auto Scaling in DynamoDB.
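The "enable API caching" step in answer D is a stage-level setting on API Gateway. A minimal sketch of the patch operations (REST API and stage names are placeholders; the boto3 call is shown as a comment):

```python
def enable_stage_cache_ops(cache_size_gb="0.5"):
    # patchOperations for apigateway.update_stage() that turn on the
    # stage cache: repeated reads are served from the cache instead of
    # invoking Lambda, which is what lowers the Lambda bill. "0.5" GB
    # is the smallest available cache size.
    return [
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": cache_size_gb},
    ]

ops = enable_stage_cache_ops()
# boto3.client("apigateway").update_stage(
#     restApiId="abc123", stageName="prod", patchOperations=ops)
```

Moving the database into RDS inside the VPC removes the high-latency VPN hop, so the Lambda timeout and memory can then be tuned down.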
Q140. A Solutions Architect must create a cost-effective backup solution for a company's 500-MB source code repository of proprietary and sensitive applications. The repository runs on Linux and backs up daily to tape. Tape backups are stored for 1 year.
The current solution is not meeting the company’s needs because it is a manual process that is prone to
error, expensive to maintain, and does not meet the need for a Recovery Point Objective (RPO) of 1 hour
or Recovery Time Objective (RTO) of 2 hours. The new disaster recovery requirement is for backups to
be stored offsite and to be able to restore a single file if needed.
Which solution meets the customer’s needs for RTO, RPO, and disaster recovery with the LEAST effort
and expense?
A. Replace local tapes with an AWS Storage Gateway virtual tape library to integrate with current backup software. Run backups nightly and store the virtual tapes on Amazon S3 Standard storage in US-EAST-1. Use cross-region replication to create a second copy in US-WEST-2. Use Amazon S3 lifecycle policies to perform automatic migration to Amazon Glacier and deletion of expired backups after 1 year.
B. Configure the local source code repository to synchronize files to an AWS Storage Gateway file gateway to store backup copies in an Amazon S3 Standard bucket. Enable versioning on the Amazon S3 bucket. Create Amazon S3 lifecycle policies to automatically migrate old versions of objects to Amazon S3 Standard-Infrequent Access, then Amazon Glacier, then delete backups after 1 year.
C. Replace the local source code repository storage with a Storage Gateway stored volume. Change the default snapshot frequency to 1 hour. Use Amazon S3 lifecycle policies to archive snapshots to Amazon Glacier and remove old snapshots after 1 year. Use cross-region replication to create a copy of the snapshots in US-WEST-2.
D. Replace the local source code repository storage with a Storage Gateway cached volume. Create a
snapshot schedule to take hourly snapshots. Use an Amazon CloudWatch Events schedule expression
rule to run an hourly AWS Lambda task to copy snapshots from US-EAST-1 to US-WEST-2
A. Replace local tapes with an AWS Storage Gateway virtual tape library to integrate with current backup software. Run backups nightly and store the virtual tapes on Amazon S3 Standard storage in US-EAST-1. Use cross-region replication to create a second copy in US-WEST-2. Use Amazon S3 lifecycle policies to perform automatic migration to Amazon Glacier and deletion of expired backups after 1 year.
Q139. A company is running multiple applications on Amazon EC2. Each application is deployed and managed
by multiple business units. All applications are deployed on a single AWS account but on different virtual
private clouds (VPCs). The company uses a separate VPC in the same account for test and development
purposes.
Production applications suffered multiple outages when users accidentally terminated and modified
resources that belonged to another business unit. A Solutions Architect has been asked to improve the availability of the company applications while allowing the Developers access to the resources they need.
Which option meets the requirements with the LEAST disruption?
A. Create an AWS account for each business unit. Move each business unit's instances to its own account and set up a federation to allow users to access their business unit's account.
B. Set up a federation to allow users to use their corporate credentials and lock the users down to their own VPC. Use a network ACL to block each VPC from accessing other VPCs.
C. Implement a tagging policy based on business units. Create an IAM policy so that each user can terminate instances belonging to their own business units only.
D. Set up role-based access for each user and provide limited permissions based on individual roles and
the services for which each user is responsible.
C. Implement a tagging policy based on business units. Create an IAM policy so that each user can terminate instances belonging to their own business units only.
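Answer C can be sketched as a tag-conditioned IAM policy. The tag key `BusinessUnit` and the included actions are assumed naming conventions for illustration, not from the question:

```python
def terminate_by_tag_policy(business_unit):
    # IAM policy allowing users to terminate or stop only the instances
    # tagged with their own business unit; instances tagged for other
    # units are implicitly denied, preventing the accidental outages.
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ec2:TerminateInstances", "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/BusinessUnit": business_unit}
            },
        }],
    }
```

This keeps all workloads in the existing account and VPCs, which is why it is the least disruptive option.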
Q138. A company with several AWS accounts is using AWS Organizations and service control policies (SCPs). An Administrator created the following SCP and has attached it to an organizational unit (OU) that contains AWS account 1111-1111-1111:
Developers working in account 1111-1111-1111 complain that they cannot create Amazon S3 buckets.
How should the Administrator address this problem?
A. Add s3:CreateBucket with the "Allow" effect to the SCP.
B. Remove the account from the OU, and attach the SCP directly to account 1111-1111-1111
C. Instruct the Developers to add Amazon S3 permissions to their IAM entities.
D. Remove the SCP from account 1111-1111-1111.
C. Instruct the Developers to add Amazon S3 permissions to their IAM entities.
Q137. A company wants to move a web application to AWS. The application stores session information locally on each web server, which will make auto scaling difficult. As part of the migration, the application will be rewritten to decouple the session data from the web servers. The company requires low latency, scalability, and availability.
Which service will meet the requirements for storing the session information in the MOST cost-effective
way?
A. Amazon ElastiCache with the Memcached engine
B. Amazon S3
C. Amazon RDS MySQL
D. Amazon ElastiCache with the Redis engine
C. Amazon RDS MySQL
Q136.
A company is running a high-user-volume media-sharing application on-premises. It currently hosts about 400 TB of data with millions of video files. The company is migrating this application to AWS to improve reliability and reduce costs.
The Solutions Architecture team plans to store the videos in an Amazon S3 bucket and use Amazon CloudFront to distribute videos to users. The company needs to migrate this application to AWS within 10 days with the least amount of downtime possible. The company currently has 1 Gbps connectivity to the internet with 30 percent free capacity. Which of the following solutions would enable the company to migrate the workload to AWS and meet all of the requirements?
A. Use multipart upload in an Amazon S3 client to upload the data to the Amazon S3 bucket in parallel over the internet. Use the throttling feature to ensure that the Amazon S3 client does not use more than 30 percent of available internet capacity.
B. Request an AWS Snowmobile with 1 PB capacity to be delivered to the data center. Load the data into the Snowmobile and send it back to have AWS download that data to the Amazon S3 bucket. Sync the new data that was generated while the migration was in flight.
C. Use an Amazon S3 client to transfer data from the data center to the Amazon S3 bucket over the
internet. Use the throttling feature to ensure the Amazon S3 client does not use more than 30 percent of
available internet capacity
D. Request multiple AWS Snowball devices to be delivered to the data center. Load the data concurrently into these devices and send them back. Have AWS download that data to the Amazon S3 bucket. Sync the new data that was generated while the migration was in flight.
D. Request multiple AWS Snowball devices to be delivered to the data center. Load the data concurrently into these devices and send them back. Have AWS download that data to the Amazon S3 bucket. Sync the new data that was generated while the migration was in flight.