Questions 100-167 Flashcards

1
Q

Q167. A Solutions Architect must build a highly available infrastructure for a popular global video game that runs on a mobile phone platform. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones.

The database tier is an Amazon RDS MySQL Multi-AZ instance. The entire application stack is deployed in both us-east-1 and eu-central-1. Amazon Route 53 is used to route traffic to the two installations using a latency-based routing policy. A weighted routing policy is configured in Route 53 as a failover to the other region in case the installation in a region becomes unresponsive.
During disaster scenario testing, after blocking access to the Amazon RDS MySQL instance in eu-central-1 from all the application instances running in that region, Route 53 does not automatically fail all traffic over to us-east-1. Based on this situation, which changes would allow the infrastructure to fail over to us-east-1? (Select TWO.)

A. Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 60 for the record pointing to the primary Application Load Balancer in eu-central-1.

B. Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 0 for the record pointing to the primary Application Load Balancer in eu-central-1.

C. Set the value of Evaluate Target Health to Yes on the latency alias resources for both eu-central-1 and us-east-1.

D. Write a URL in the application that performs a health check on the database layer. Add it as a health check within the weighted routing policy in both regions.

E. Disable any existing health checks for the resources in the policies. Set a weight of 0 for the records pointing to the primary Application Load Balancer in both eu-central-1 and us-east-1, and set a weight of 100 for the primary Application Load Balancer only in the region that has healthy resources.

A

A. Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 60 for the record pointing to the primary Application Load Balancer in eu-central-1.

B. Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 0 for the record pointing to the primary Application Load Balancer in eu-central-1.
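
For reference, a minimal boto3 sketch of the weighted failover described in answer B. The hosted zone ID, record name, ALB DNS names, and alias hosted zone IDs are placeholders, not values from the question:

```python
import boto3

route53 = boto3.client("route53")

def set_weight(set_identifier, alb_dns, alb_zone_id, weight):
    # UPSERT one weighted alias record; weight 0 stops sending traffic there.
    route53.change_resource_record_sets(
        HostedZoneId="Z3EXAMPLE",  # placeholder zone ID
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "game.example.com",
                "Type": "A",
                "SetIdentifier": set_identifier,
                "Weight": weight,
                "AliasTarget": {
                    "HostedZoneId": alb_zone_id,  # the ALB's canonical hosted zone ID
                    "DNSName": alb_dns,
                    "EvaluateTargetHealth": True,
                },
            },
        }]},
    )

set_weight("us-east-1", "alb-use1.us-east-1.elb.amazonaws.com", "Z_ALB_USE1", 100)
set_weight("eu-central-1", "alb-euc1.eu-central-1.elb.amazonaws.com", "Z_ALB_EUC1", 0)
```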

2
Q

Q166. The database tier is an Amazon RDS MySQL Multi-AZ instance. The entire application stack is deployed in both us-east-1 and eu-central-1. Amazon Route 53 is used to route traffic to the two installations using a latency-based routing policy. A weighted routing policy is configured in Route 53 as a failover to the other region in case the installation in a region becomes unresponsive.
During disaster scenario testing, after blocking access to the Amazon RDS MySQL instance in eu-central-1 from all the application instances running in that region, Route 53 does not automatically fail all traffic over to us-east-1.

Based on this situation, which changes would allow the infrastructure to fail over to us-east-1? (Select TWO.)

A. Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 60 for the record pointing to the primary Application Load Balancer in eu-central-1.

B. Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 0 for the record pointing to the primary Application Load Balancer in eu-central-1.

C. Set the value of Evaluate Target Health to Yes on the latency alias resources for both eu-central-1 and us-east-1.

D. Write a URL in the application that performs a health check on the database layer. Add it as a health check within the weighted routing policy in both regions.

E. Disable any existing health checks for the resources in the policies. Set a weight of 0 for the records pointing to the primary Application Load Balancer in both eu-central-1 and us-east-1, and set a weight of 100 for the primary Application Load Balancer only in the region that has healthy resources.

A

A. Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 60 for the record pointing to the primary Application Load Balancer in eu-central-1.

B. Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 0 for the record pointing to the primary Application Load Balancer in eu-central-1.

3
Q

Q165

The CISO of a large enterprise with multiple IT departments, each with its own AWS account, wants one central place where AWS permissions for users can be managed and users' authentication credentials can be synchronized with the company's existing on-premises solution. Which solution will meet the CISO's requirements?

A. Define AWS IAM roles based on the functional responsibilities of the users in a central account. Create a SAML-based identity management provider. Map users in the on-premises groups to IAM roles. Establish a trust relationship between the other accounts and the central account.

B. Deploy a common set of AWS IAM users, groups, roles, and policies in all the AWS accounts using AWS Organizations. Implement federation between the on-premises identity provider and the AWS accounts.

C. Use AWS Organizations in a centralized account to define service control policies (SCPs). Create a SAML-based identity management provider in each account and map users in the on-premises groups to AWS IAM roles.

D. Perform a thorough analysis of the user base and create AWS IAM user accounts that have the necessary permissions. Set up a process to provision and deprovision accounts based on data in the on-premises solution.

A

A. Define AWS IAM roles based on the functional responsibilities of the users in a central account. Create a SAML-based identity management provider. Map users in the on-premises groups to IAM roles. Establish a trust relationship between the other accounts and the central account.
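
As a sketch of the central-account side of answer A (the metadata file, provider name, and role name are hypothetical):

```python
import json
import boto3

iam = boto3.client("iam")

# Register the on-premises IdP's SAML metadata in the central account.
with open("idp-metadata.xml") as f:
    provider = iam.create_saml_provider(SAMLMetadataDocument=f.read(), Name="OnPremIdP")

# A role that federated users from a mapped on-premises group can assume.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": provider["SAMLProviderArn"]},
        "Action": "sts:AssumeRoleWithSAML",
        "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}},
    }],
}
iam.create_role(RoleName="Developers", AssumeRolePolicyDocument=json.dumps(trust))
```

Roles in the other accounts would then trust the central account, so users federate once and switch roles across accounts.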

4
Q

Q164

What combination of steps could a Solutions Architect take to protect a web workload running on Amazon EC2 from DDoS application-layer attacks? (Select TWO.)

A. Put the EC2 instances behind a Network Load Balancer and configure AWS WAF on it.

B. Migrate the DNS to Amazon Route 53 and use AWS Shield.

C. Put the EC2 instances in an Auto Scaling group and configure AWS WAF on it.

D. Create and use an Amazon CloudFront distribution and configure AWS WAF on it.

E. Create and use an internet gateway in the VPC and AWS Shield.

A

B. Migrate the DNS to Amazon Route 53 and use AWS Shield.

D. Create and use an Amazon CloudFront distribution and configure AWS WAF on it.
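
A minimal wafv2 sketch of answer D's WAF piece; a rate-based rule is one common application-layer defense. The ACL name and limit are assumptions, and the resulting web ACL ARN still has to be attached to the CloudFront distribution:

```python
import boto3

# Web ACLs with CLOUDFRONT scope must be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="web-ddos-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit",
        "Priority": 0,
        # Block any single client IP exceeding 2,000 requests per 5 minutes.
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "rate-limit",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "web-ddos-acl",
    },
)
```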

5
Q

Q163

A Solutions Architect is designing a highly available and reliable solution for a cluster of Amazon EC2 instances.

The Solutions Architect must ensure that any EC2 instance within the cluster recovers automatically after a system failure. The solution must ensure that the recovered instance maintains the same IP address.

How can these requirements be met?

A. Create an AWS Lambda script to restart any EC2 instances that shut down unexpectedly.

B. Create an Auto Scaling group for each EC2 instance that has a minimum and maximum size of 1.

C. Create a new t2.micro instance to monitor the cluster instances. Configure the t2.micro instance to issue an aws ec2 reboot-instances command upon failure.

D. Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric, and then configure an EC2 action to recover the instance.

A

D. Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric, and then configure an EC2 action to recover the instance.
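
A sketch of answer D with boto3; the instance ID is a placeholder. A recovered instance keeps its instance ID, private IP addresses, Elastic IP addresses, and instance metadata:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="recover-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    # The built-in EC2 recover action; the region must match the instance.
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```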

6
Q

Q162

A company is currently using AWS CodeCommit for its source control and AWS CodePipeline for continuous integration. The pipeline has a build stage for building the artifacts, which are then staged in an Amazon S3 bucket.

The company has identified various improvement opportunities in the existing process, and a Solutions Architect has been given the following requirements:

Create a new pipeline to support feature development without impacting production applications.
Incorporate continuous testing with unit tests.
Isolate development and production artifacts.
Support the capability to merge tested code into production code.

How should the Solutions Architect achieve these requirements?

A. Trigger a separate pipeline from CodeCommit feature branches. Use AWS CodeBuild for running unit tests. Use CodeBuild to stage the artifacts within an S3 bucket in a separate testing account.

B. Trigger a separate pipeline from CodeCommit feature branches. Use AWS Lambda for running unit tests. Use AWS CodeDeploy to stage the artifacts within an S3 bucket in a separate testing account.

C. Trigger a separate pipeline from CodeCommit tags. Use Jenkins for running unit tests. Create a stage in the pipeline with S3 as the target for staging the artifacts within an S3 bucket in a separate testing account.

D. Create a separate CodeCommit repository for development and use it to trigger the pipeline. Use AWS Lambda for running unit tests. Use AWS CodeBuild to stage the artifacts within different S3 buckets in the same production account.

A

A. Trigger a separate pipeline from CodeCommit feature branches. Use AWS CodeBuild for running unit tests. Use CodeBuild to stage the artifacts within an S3 bucket in a separate testing account.

7
Q

Q161

A company has an internal AWS Elastic Beanstalk worker environment inside a VPC that must access an external payment gateway API available on an HTTPS endpoint on the public internet. Because of security policies, the payment gateway's application team can grant access to only one public IP address. Which architecture will set up an Elastic Beanstalk environment to access the company's application without multiple changes on the company's end?

A. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a private subnet with an outbound route to a NAT gateway in a public subnet. Associate an Elastic IP address with the NAT gateway that can be whitelisted on the payment gateway application side.

B. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a public subnet with an internet gateway. Associate an Elastic IP address with the internet gateway that can be whitelisted on the payment gateway application side.

C. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a private subnet. Set an HTTPS_PROXY application parameter to send outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address with the EC2 proxy host that can be whitelisted on the payment gateway application side.

D. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a public subnet. Set the HTTPS_PROXY and NO_PROXY application parameters to send non-VPC outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address with the EC2 proxy host that can be whitelisted on the payment gateway application side.

A

C. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a private subnet. Set an HTTPS_PROXY application parameter to send outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address with the EC2 proxy host that can be whitelisted on the payment gateway application side.
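
The Elastic IP piece of answer C could look like this in boto3 (the proxy instance ID is hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate a static public IP and bind it to the proxy instance.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(AllocationId=eip["AllocationId"], InstanceId="i-0proxy0123456789a")
print("Single IP to whitelist with the payment gateway:", eip["PublicIp"])
```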

8
Q

Q160

A company must deploy multiple independent instances of an application. The front-end application is internet accessible. However, corporate policy stipulates that the backends are to be isolated from each other and from the internet, yet accessible from a centralized administration server. The application setup should be automated to minimize the opportunity for mistakes as new instances are deployed.

Which option meets the requirements and minimizes costs?

A. Use an AWS CloudFormation template to create identical IAM roles for each region. Use AWS CloudFormation StackSets to deploy each application instance by using parameters to customize it, and use security groups to isolate each instance while permitting access to the central server.

B. Create each instance of the application's IAM roles and resources in separate accounts by using AWS CloudFormation StackSets. Include a VPN connection to the VPN gateway of the central administration server.

C. Duplicate the application's IAM roles and resources in separate accounts by using a single AWS CloudFormation template. Include VPC peering to connect the VPC of each application instance to a central VPC.

D. Use the parameters of the AWS CloudFormation templates to customize the deployment into separate accounts. Include a NAT gateway to allow communication back to the central administration server.

A

D. Use the parameters of the AWS CloudFormation templates to customize the deployment into separate accounts. Include a NAT gateway to allow communication back to the central administration server.
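
A sketch of the parameterized, repeatable deployment that answer D describes, assuming a stack set already exists for the shared template (all names and values are hypothetical):

```python
import boto3

cfn = boto3.client("cloudformation")

cfn.create_stack_instances(
    StackSetName="app-instance",
    Accounts=["111111111111"],   # the new application instance's account
    Regions=["us-east-1"],
    ParameterOverrides=[         # per-instance customization via parameters
        {"ParameterKey": "InstanceName", "ParameterValue": "team-a"},
        {"ParameterKey": "AdminServerCidr", "ParameterValue": "10.10.0.5/32"},
    ],
)
```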

9
Q

Q159
A company has deployed an application to multiple environments in AWS, including production and testing. The company has separate accounts for production and testing, and users are allowed to create additional application users for team members or services as needed. The Security team has asked the Operations team for better isolation between production and testing, with centralized controls on security credentials and improved management of permissions between environments.

Which of the following options would MOST securely accomplish this goal?

A. Create a new AWS account to hold user and service accounts, such as an identity account. Create users and groups in the identity account. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.

B. Modify permissions in the production and testing accounts to limit creating new IAM users to members of the Operations team. Set a strong IAM password policy on each account. Create new IAM users and groups in each account to limit developer access to just the services required to complete their job functions.

C. Create a script that runs on each account and checks user accounts for adherence to a security policy. Disable any user or service accounts that do not comply.

D. Create all user accounts in the production account. Create roles for access in the production and testing accounts. Grant cross-account access from the production account to the testing accounts.

A

A. Create a new AWS account to hold user and service accounts, such as an identity account. Create users and groups in the identity account. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.
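
With answer A in place, a user signs in to the identity account and assumes a role in the target environment; a minimal sketch with hypothetical account IDs and role names:

```python
import boto3

sts = boto3.client("sts")  # credentials belong to a user in the identity account

creds = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/ProdDeploy",  # role in the production account
    RoleSessionName="alice",
)["Credentials"]

# Temporary credentials scoped to the production role.
prod_ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```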

10
Q

Q158.

A company has an Amazon EC2 deployment that has the following architecture:

An application tier that contains 8 m4.xlarge instances
A Classic Load Balancer
Amazon S3 as a persistent data store

After one of the EC2 instances fails, users report very slow processing of their requests. A Solutions Architect must recommend design changes to maximize system reliability. The solution must minimize costs.

What should the Solutions Architect recommend?

A. Migrate the existing EC2 instances to a serverless deployment using AWS Lambda functions.

B. Change the Classic Load Balancer to an Application Load Balancer.

C. Replace the application tier with m4.large instances in an Auto Scaling group.

D. Replace the application tier with 4 m4.2xlarge instances.

A

C. Replace the application tier with m4.large instances in an Auto Scaling group.
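
A rough boto3 sketch of answer C; the launch template, subnets, and target group ARN are placeholders. With an ELB health check, a failed instance is replaced automatically instead of silently shrinking capacity:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="app-tier",
    LaunchTemplate={"LaunchTemplateName": "app-m4-large", "Version": "$Latest"},
    MinSize=8,
    MaxSize=16,
    DesiredCapacity=8,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # spread across AZs
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/app/abc123"],
    HealthCheckType="ELB",            # replace instances the load balancer marks unhealthy
    HealthCheckGracePeriod=300,
)
```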

11
Q

Q157.

A company has a data center that must be migrated to AWS as quickly as possible. The data center has a 500 Mbps AWS Direct Connect link and a separate, fully available 1 Gbps ISP connection. A Solutions Architect must transfer 20 TB of data from the data center to an Amazon S3 bucket.

What is the fastest way to transfer the data?

A. Upload the data to the S3 bucket using the existing DX link.

B. Send the data to AWS using the AWS Import/Export service.

C. Upload the data using an 80 TB AWS Snowball device.

D. Upload the data to the S3 bucket using S3 Transfer Acceleration.

A

A. Upload the data to the S3 bucket using the existing DX link.

12
Q

Q156.

A company is running an email application across multiple AWS Regions. The company uses Ohio (us-east-2) as the primary Region and Northern Virginia (us-east-1) as the disaster recovery (DR) Region. The data is continuously replicated from the primary Region to the DR Region by a single instance in the public subnet in both Regions. The replication messages between the Regions have a significant backlog during certain times of the day. The backlog clears on its own after a short time, but it affects the application's RPO.
Which of the following solutions should help remediate this performance problem? (Select TWO.)

A. Increase the size of the instances.

B. Have the instance in the primary Region write the data to an Amazon SQS queue in the primary Region instead, and have the instance in the DR Region poll from this queue.

C. Use multiple instances in the primary and DR Regions to send and receive the replication data.

D. Change the DR Region to Oregon (us-west-2) instead of the current DR Region.

A

B. Have the instance in the primary Region write the data to an Amazon SQS queue in the primary Region instead, and have the instance in the DR Region poll from this queue.

C. Use multiple instances in the primary and DR Regions to send and receive the replication data.
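
A minimal sketch of the queue-based decoupling in answer B (queue name and payload are hypothetical); multiple DR-Region consumers, per answer C, can drain the same queue in parallel:

```python
import boto3

# Queue lives in the primary Region; producers and consumers both use its URL.
sqs = boto3.client("sqs", region_name="us-east-2")
queue_url = sqs.create_queue(QueueName="replication-events")["QueueUrl"]

# Primary-Region writer: enqueue changes instead of pushing them directly.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"change": "..."}')

# DR-Region consumer loop body: long-poll, apply, then delete.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    # ...apply the replicated change in the DR Region...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```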

13
Q

Q155.
A hybrid network architecture must be used during a company's multi-year data center migration from multiple private data centers to AWS. The current data centers are linked together with private fiber. Due to unique legacy applications, NAT cannot be used. During the migration period, many applications will need access to other applications in both the data centers and AWS.

Which option offers a hybrid network architecture that is secure and highly available, and that allows for high bandwidth and a multi-region deployment post-migration?

A. Use AWS Direct Connect to each data center from different ISPs and configure routing to fail over to the other data center's Direct Connect if one fails. Ensure that no VPC CIDR blocks overlap with one another or with the on-premises network.

B. Use multiple hardware VPN connections to AWS from the on-premises data center. Route different subnet traffic through different VPN connections. Ensure that no VPC CIDR blocks overlap one another or the on-premises network.

C. Use AWS Direct Connect and a VPN as a backup, and configure both to use the same virtual private gateway and BGP. Ensure that no VPC CIDR blocks overlap one another or the on-premises network.

A

A. Use AWS Direct Connect to each data center from different ISPs and configure routing to fail over to the other data center's Direct Connect if one fails. Ensure that no VPC CIDR blocks overlap with one another or with the on-premises network.

14
Q

Q154.
An online e-commerce business is running a workload on AWS. The application architecture includes a web tier, an application tier for business logic, and a database tier for user and transactional data management. The database server has a 100 GB memory requirement. The business requires cost-efficient disaster recovery for the application, with an RTO of 5 minutes and an RPO of 1 hour. The business also has a regulatory requirement for out-of-region disaster recovery with a minimum distance of 250 miles between the primary and alternate sites.
Which of the following options can the Solutions Architect design to create a comprehensive solution for this customer that meets the DR requirements?

A. Back up the application and database data frequently and copy them to Amazon S3. Replicate the backups using S3 cross-region replication, and use AWS CloudFormation to instantiate infrastructure for disaster recovery and restore data from Amazon S3.

B. Employ a pilot light environment in which the primary database is configured with mirroring to build a standby database on m4.large in the alternate region. Use AWS CloudFormation to instantiate the web servers, application servers, and load balancers in case of a disaster to bring the application up in the alternate region. Vertically resize the database to meet the full production demands, and use Amazon Route 53 to switch traffic to the alternate region.

C. Use a scaled-down version of the fully functional production environment in the alternate region that includes one instance of the web server, one instance of the application server, and a replicated instance of the database server in standby mode. Place the web and application tiers in an Auto Scaling group behind a load balancer, which can automatically scale when load arrives at the application. Use Amazon Route 53 to switch traffic to the alternate region.

A

C. Use a scaled-down version of the fully functional production environment in the alternate region that includes one instance of the web server, one instance of the application server, and a replicated instance of the database server in standby mode. Place the web and application tiers in an Auto Scaling group behind a load balancer, which can automatically scale when load arrives at the application. Use Amazon Route 53 to switch traffic to the alternate region.

15
Q

Q153.
An e-commerce company is revamping its IT infrastructure and is planning to use AWS services. The company's CIO has asked a Solutions Architect to design a simple, highly available, and loosely coupled order processing application. The application is responsible for receiving and processing orders before storing them in an Amazon DynamoDB table. The application has a sporadic traffic pattern and should be able to scale during marketing campaigns to process orders with minimal delays.

Which of the following is the MOST reliable approach to meet the requirements?

A. Receive the orders in an Amazon EC2-hosted database and use EC2 instances to process them.
B. Receive the orders in an Amazon SQS queue and trigger an AWS Lambda function to process them.
C. Receive the orders using the AWS Step Functions program and trigger an Amazon ECS container to process them.
D. Receive the orders in Amazon Kinesis Data Streams and use Amazon EC2 instances to process them.

A

B. Receive the orders in an Amazon SQS queue and trigger an AWS Lambda function to process them.
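
Wiring the queue to Lambda, per answer B, is one API call; the queue ARN and function name are hypothetical. Lambda then polls the queue and scales its concurrency with the backlog, which is what absorbs campaign spikes:

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111111111111:orders",
    FunctionName="process-order",
    BatchSize=10,  # up to 10 orders per invocation
)
```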

16
Q

Q152.
A company's CFO recently analyzed the company's AWS monthly bill and identified an opportunity to reduce the cost of the AWS Elastic Beanstalk environments in use. The CFO has asked a Solutions Architect to design a highly available solution that will spin up an Elastic Beanstalk environment in the morning and terminate it at the end of the day.
The solution should be designed with minimal operational overhead and to minimize costs. It should also be able to handle the increased use of Elastic Beanstalk environments among different teams, and must provide a one-stop scheduler solution for all teams to keep the operational costs low.

What design will meet these requirements?

A. Set up a Linux EC2 micro instance. Configure an IAM role to allow the start and stop of the Elastic Beanstalk environment and attach it to the instance. Create scripts on the instance to start and stop the Elastic Beanstalk environment. Configure cron jobs on the instance to execute the scripts.

B. Develop AWS Lambda functions to start and stop the Elastic Beanstalk environment. Configure a Lambda execution role granting Elastic Beanstalk environment start/stop permissions, and assign the role to the Lambda functions. Configure cron-expression Amazon CloudWatch Events rules to trigger the Lambda functions.

C. Develop an AWS Step Functions state machine with "Wait" as its type to control the start and stop time. Use the activity task to start and stop the Elastic Beanstalk environment. Create a role for Step Functions to allow it to start and stop the Elastic Beanstalk environment. Invoke Step Functions daily.

D. Configure a time-based Auto Scaling group. In the morning, have the Auto Scaling group scale up an Amazon EC2 instance and put the Elastic Beanstalk environment start command in the EC2 instance user data. At the end of the day, scale down the instance number to 0 to terminate the EC2 instance.

A

B. Develop AWS Lambda functions to start and stop the Elastic Beanstalk environment. Configure a Lambda execution role granting Elastic Beanstalk environment start/stop permissions, and assign the role to the Lambda functions. Configure cron-expression Amazon CloudWatch Events rules to trigger the Lambda functions.
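
The scheduling half of answer B might look like this; the rule name, schedule, and function ARN are assumptions, and the Lambda function also needs a resource policy permitting events.amazonaws.com to invoke it:

```python
import boto3

events = boto3.client("events")

# 8:00 AM UTC on weekdays; a matching evening rule would trigger the stop function.
events.put_rule(Name="start-eb-environments", ScheduleExpression="cron(0 8 ? * MON-FRI *)")
events.put_targets(
    Rule="start-eb-environments",
    Targets=[{"Id": "start", "Arn": "arn:aws:lambda:us-east-1:111111111111:function:start-eb"}],
)
```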

17
Q

Q151

A company has an Amazon VPC that is divided into a public subnet and a private subnet. A web application runs in the Amazon VPC, and each subnet has its own NACL. The public subnet has a CIDR of 10.0.0.0/24. An Application Load Balancer is deployed to the public subnet. The private subnet has a CIDR of 10.0.1.0/24. Amazon EC2 instances that run a web server on port 80 are launched into the private subnet. Only network traffic that is required for the Application Load Balancer to access the web application can be allowed to travel between the public and private subnets.

What collection of rules should be written to ensure that the private subnet's NACL meets the requirement? (Select TWO.)

A. An inbound rule for port 80 from source 0.0.0.0/0
B. An inbound rule for port 80 from source 10.0.0.0/24
C. An outbound rule for port 80 to destination 0.0.0.0/0
D. An outbound rule for port 80 to destination 10.0.0.0/24
E. An outbound rule for ports 1024 through 65535 to destination 10.0.0.0/24

A

B. An inbound rule for port 80 from source 10.0.0.0/24

D. An outbound rule for port 80 to destination 10.0.0.0/24
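
A sketch that mirrors the card's chosen rules B and D on the private subnet's NACL (the NACL ID is hypothetical; protocol "6" is TCP):

```python
import boto3

ec2 = boto3.client("ec2")
acl_id = "acl-0123456789abcdef0"  # the private subnet's NACL

# Rule B: inbound HTTP from the ALB's public subnet only.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
    Egress=False, CidrBlock="10.0.0.0/24", PortRange={"From": 80, "To": 80},
)

# Rule D: outbound port 80 back to the public subnet.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
    Egress=True, CidrBlock="10.0.0.0/24", PortRange={"From": 80, "To": 80},
)
```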

18
Q

Q150. A Solutions Architect is redesigning an image viewing and messaging platform to be delivered as SaaS. Currently, there is a farm of virtual desktop infrastructure (VDI) that runs a desktop image viewing application and a desktop messaging application. Both applications use a shared database to manage user accounts and sharing. Users log in from a web portal that launches the applications and streams the view of the application on the user's machine. The Development Operations team wants to move away from using VDI and wants to rewrite the application.

What is the MOST cost-effective architecture that offers both security and ease of management?

A. Run a website from an Amazon S3 bucket with a separate S3 bucket for images and messaging data. Call AWS Lambda functions from embedded JavaScript to manage the dynamic content, and use Amazon Cognito for user and sharing management.

B. Run a website from Amazon EC2 Linux servers, storing the images in Amazon S3, and use Amazon Cognito for user accounts and sharing. Create AWS CloudFormation templates to launch the application by using EC2 user data to install and configure the application.

C. Run a website as an AWS Elastic Beanstalk application, storing the images in Amazon S3, and using an Amazon RDS database for user accounts and sharing. Create AWS CloudFormation templates to launch the application and perform blue-green deployments.

D. Run a website from an Amazon S3 bucket that authorizes Amazon AppStream to stream applications for a combined image viewer and messenger that stores images in Amazon S3. Have the website use an Amazon RDS database for user accounts and sharing.

A

A. Run a website from an Amazon S3 bucket with a separate S3 bucket for images and messaging data. Call AWS Lambda functions from embedded JavaScript to manage the dynamic content, and use Amazon Cognito for user and sharing management.

19
Q

Q149

A company stores sales transaction data in Amazon DynamoDB tables. To detect anomalous behavior and respond quickly, all changes to the items stored in the DynamoDB tables must be logged within 30 minutes.

Which solution meets the requirements?

A. Copy the DynamoDB tables into Apache Hive tables on Amazon EMR every hour and analyze them for anomalous behavior. Send Amazon SNS notifications when anomalous behavior is detected.
B. Use AWS CloudTrail to capture all the APIs that change the DynamoDB tables. Send SNS notifications when anomalous behaviors are detected using CloudTrail event filtering.
C. Use Amazon DynamoDB Streams to capture and send updates to AWS Lambda. Create a Lambda function to output records to Amazon Kinesis Data Streams. Analyze any anomalies with Amazon Kinesis Data Analytics. Send SNS notifications when anomalies are detected.

A

C. Use Amazon DynamoDB Streams to capture and send updates to AWS Lambda. Create a Lambda function to output records to Amazon Kinesis Data Streams. Analyze any anomalies with Amazon Kinesis Data Analytics. Send SNS notifications when anomalies are detected.
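
Enabling the stream that answer C builds on is a single call (table name is hypothetical); a Lambda function subscribed to the stream then forwards each change record to Kinesis Data Streams:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="SalesTransactions",
    # Emit both the old and new item images for every change.
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
```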

20
Q

Q148. To abide by industry regulations, a Solutions Architect must design a solution that will store a company's critical data in multiple public AWS Regions, including in the United States, where the company's headquarters is located. The Solutions Architect is required to provide access to the data stored in AWS to the company's global WAN network. The Security team mandates that no traffic accessing this data should traverse the public internet.

How should the Solutions Architect design a highly available solution that meets the requirements and is cost-effective?

A. Establish AWS Direct Connect connections from the company headquarters to all AWS Regions in use. Use the company WAN to send traffic over to the headquarters and then to the respective DX connection to access the data.

B. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use inter-region VPC peering to access the data in other AWS Regions.

C. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use an AWS Transit VPC solution to access data in other AWS Regions.

D. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use a Direct Connect gateway to access data in other AWS Regions.

A

D. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use a Direct Connect gateway to access data in other AWS Regions.

21
Q

Q147. A company wants to host its website on AWS using serverless architecture design patterns for global customers.

The company has its requirements as follows:

  • The website should be responsive
  • The website should offer minimal latency
  • The website should be highly available
  • Users should be able to authenticate through social identity providers such as Google, Facebook, and Amazon
  • There should be baseline DDoS protections for spikes in traffic

How can the design requirements be met?

A. Use Amazon CloudFront with Amazon ECS for hosting the website. Use AWS Secrets Manager to provide user management and authentication functions. Use ECS Docker containers to build an API.

B. Use Amazon Route 53 latency routing with an Application Load Balancer and AWS Fargate in different Regions for hosting the website. Use Amazon Cognito to provide user management and authentication functions. Use Amazon EKS containers to build an API.

C. Use Amazon CloudFront with Amazon S3 for hosting static web resources. Use Amazon Cognito to provide user management and authentication functions. Use Amazon API Gateway with AWS Lambda to build an API.

D. Use AWS Direct Connect with Amazon CloudFront and Amazon S3 for hosting static web resources. Use Amazon Cognito to provide user management and authentication functions. Use AWS Lambda to build an API.

A

C. Use Amazon CloudFront with Amazon S3 for hosting static web resources. Use Amazon Cognito to provide user management and authentication functions. Use Amazon API Gateway with AWS Lambda to build an API.

22
Q

Q146. A company has multiple AWS accounts hosting IT applications. An Amazon CloudWatch Logs agent is installed on all Amazon EC2 instances. The company wants to aggregate all security events in a centralized AWS account dedicated to log storage. Security Administrators need to perform near-real-time gathering and correlating of events across multiple AWS accounts. Which solution satisfies these requirements?

A. Create a Log Audit IAM role in each application AWS account with permissions to view CloudWatch Logs, configure an AWS Lambda function to assume the Log Audit role, and perform an hourly export of CloudWatch Logs data to an Amazon S3 bucket in the logging AWS account.

B. Configure CloudWatch Logs streams in each application AWS account to forward events to CloudWatch Logs in the logging AWS account. In the logging AWS account, subscribe an Amazon Kinesis Data Firehose stream to Amazon CloudWatch Events and use the stream to persist log data in Amazon S3.

C. Create Amazon Kinesis Data Streams in the logging account, subscribe the stream to CloudWatch Logs streams in each application AWS account, configure an Amazon Kinesis Data Firehose delivery stream with the Data Streams as its source, and persist the log data in an Amazon S3 bucket inside the logging AWS account.

D. Configure CloudWatch Logs agents to publish data to an Amazon Kinesis Data Firehose stream in the logging AWS account, use an AWS Lambda function to read messages from the stream and push messages to Data Firehose, and persist the data in Amazon S3.

A

C. Create Amazon Kinesis Data Streams in the logging account, subscribe the stream to CloudWatch Logs streams in each application AWS account, configure an Amazon Kinesis Data Firehose delivery stream with the Data Streams as its source, and persist the log data in an Amazon S3 bucket inside the logging AWS account.
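
The application-account side of answer C could be a subscription filter pointed at a CloudWatch Logs destination that the logging account has created in front of its Kinesis data stream (via put_destination and put_destination_policy); all names and ARNs here are hypothetical:

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

logs.put_subscription_filter(
    logGroupName="/security/events",
    filterName="to-central-logging",
    filterPattern="",  # empty pattern forwards every event
    destinationArn="arn:aws:logs:us-east-1:999999999999:destination:central-security-logs",
)
```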

23
Q

Q145. A company has a web application that securely uploads pictures and videos to an Amazon S3 bucket. The company requires that only authenticated users are allowed to post content. The application generates a pre-signed URL that is used to upload objects through a browser interface. Most users are reporting slow upload times for objects larger than 100 MB. What can a Solutions Architect do to improve the performance of these uploads while ensuring only authenticated users are allowed to post content?

A. Set up an Amazon API Gateway with an edge-optimized API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using a COGNITO_USER_POOLS authorizer. Have the browser interface use API Gateway instead of the pre-signed URL to upload objects.

B. Set up an Amazon API Gateway with a regional API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using an AWS Lambda authorizer. Have the browser interface use API Gateway instead of the pre-signed URL to upload objects.

C. Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when generating the pre-signed URL. Have the browser interface upload the objects to this URL using the S3 multipart upload API.

D. Configure an Amazon CloudFront distribution for the destination S3 bucket. Enable PUT and POST methods for the CloudFront cache behavior. Update the CloudFront origin to use an origin access identity (OAI). Give the OAI user s3:PutObject permissions in the bucket policy. Have the browser interface upload objects using the CloudFront distribution.

A

C. Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when generating the pre-signed URL. Have the browser interface upload the objects to this URL using the S3 multipart upload API.
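
A sketch of answer C: enable acceleration once, then sign URLs against the accelerate endpoint (bucket and key are hypothetical):

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="media-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Pre-signed PUT URLs generated by this client use the accelerate endpoint,
# so browsers upload through the nearest edge location.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
url = s3_accel.generate_presigned_url(
    "put_object",
    Params={"Bucket": "media-uploads", "Key": "videos/clip.mp4"},
    ExpiresIn=3600,
)
```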

24
Q

Q144. A company needs to cost-effectively persist small data records (up to 1 KB) for up to 30 days.

The data is read rarely. When reading the data, a 5-minute delay is acceptable. Which of the following solutions achieve this goal? (Select TWO.)

A. Use Amazon S3 to collect multiple records in one S3 object. Use a lifecycle configuration to move data to Amazon Glacier immediately after write. Use expedited retrievals when reading the data.

B. Write the records to Amazon Kinesis Data Firehose and configure Kinesis Data Firehose to deliver the data to Amazon S3 after 5 minutes. Set an expiration action at 30 days on the S3 bucket.

C. Use an AWS Lambda function invoked via Amazon API Gateway to collect data for 5 minutes. Write data to Amazon S3 just before the Lambda execution stops.

D. Write the records to Amazon DynamoDB configured with a Time To Live (TTL) of 30 days. Read the data using the GetItem or BatchGetItem call.

E. Write the records to Amazon ElastiCache for Redis. Configure the Redis append-only file (AOF) persistence logs to write to Amazon S3. Recover from the log if the ElastiCache instance has failed.

A

A. Use Amazon S3 to collect multiple records in one S3 object. Use a lifecycle configuration to move data to Amazon Glacier immediately after write. Use expedited retrievals when reading the data.

C. Use an AWS Lambda function invoked via Amazon API Gateway to collect data for 5 minutes. Write data to Amazon S3 just before the Lambda execution stops.
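
The lifecycle half of answer A, as a sketch with a hypothetical bucket name: transition to Glacier on day 0 and expire at day 30:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="record-archive",
    LifecycleConfiguration={"Rules": [{
        "ID": "glacier-then-expire",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # apply to every object
        "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 30},
    }]},
)
```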

25
Q
Q143. A company manages more than 200 separate internet-facing web applications. All of the applications are deployed to AWS in a single AWS Region. The fully qualified domain names (FQDNs) of all of the applications are made available through HTTPS using Application Load Balancers (ALBs). The ALBs are configured to use public SSL/TLS certificates.

A Solutions Architect needs to migrate the web applications to a multi-region architecture. All HTTPS services should continue to work without interruption.
Which approach meets these requirements?

A. Request a certificate for each FQDN using AWS KMS. Associate the certificates with the ALBs in the primary AWS Region. Enable cross-region availability in AWS KMS for the certificates and associate the certificates with the ALBs in the secondary AWS Region.

B. Generate the key pairs and certificate requests for each FQDN using AWS KMS. Associate the certificates with the ALBs in both the primary and secondary AWS Regions.

C. Request a certificate for each FQDN using AWS Certificate Manager. Associate the certificates with the ALBs in both the primary and secondary AWS Regions.

D. Request certificates for each FQDN in both the primary and secondary AWS Regions using AWS Certificate Manager. Associate the certificates with the corresponding ALBs in each AWS Region.

A

D. Request certificates for each FQDN in both the primary and secondary AWS Regions using AWS Certificate Manager. Associate the certificates with the corresponding ALBs in each AWS Region.
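
Since ACM certificates are regional, answer D amounts to repeating the request in every Region that hosts an ALB; the domain and Regions here are hypothetical:

```python
import boto3

for region in ("us-east-1", "us-west-2"):
    acm = boto3.client("acm", region_name=region)
    acm.request_certificate(DomainName="app1.example.com", ValidationMethod="DNS")
```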

26
Q

Q142. A company's data center is connected to the AWS Cloud over a minimally used 10-Gbps AWS Direct Connect connection with a private virtual interface to its virtual private cloud (VPC). The company's internet connection is 200 Mbps, and the company has a 150-TB dataset that is created each Friday. The data must be transferred to and available in Amazon S3 on Monday morning.

Which is the LEAST expensive way to meet the requirements while allowing for data transfer growth?

A. Order two 80-TB AWS Snowball appliances. Offload the data to the appliances and ship them to AWS. AWS will copy the data from the Snowball appliances to Amazon S3.

B. Create a VPC endpoint for Amazon S3. Copy the data to Amazon S3 by using the VPC endpoint, forcing the transfer to use the Direct Connect connection.

C. Create a VPC endpoint for Amazon S3. Set up a reverse proxy farm behind a Classic Load Balancer in the VPC. Copy the data to Amazon S3 using the proxy.

D. Create a public virtual interface on a Direct Connect connection and copy the data to Amazon S3 over the connection.

A

D. Create a public virtual interface on a Direct Connect connection and copy the data to Amazon S3 over the connection.
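
A back-of-envelope check of why only the Direct Connect path fits, assuming roughly a 60-hour Friday-evening-to-Monday-morning window:

```python
dataset_bits = 150e12 * 8              # 150 TB expressed in bits
window_seconds = 60 * 3600             # ~60-hour weekend window (assumption)
required_gbps = dataset_bits / window_seconds / 1e9
print(f"{required_gbps:.1f} Gbps required")  # ~5.6 Gbps

# The 200 Mbps internet link is far too slow, while the mostly idle
# 10-Gbps Direct Connect connection has headroom for growth.
```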

27
Q

Q141. An organization has a write-intensive mobile application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB.

The application has scaled well; however, costs have increased exponentially because of higher than anticipated Lambda costs.

The application's use is unpredictable, but there has been a steady 20% increase in utilization every month.

While monitoring the current Lambda functions, the Solutions Architect notices that the execution time averages 4.5 minutes. Most of the wait time is the result of a high-latency network call to a 3-TB MySQL database server that is on-premises.

A VPN is used to connect to the VPC, so the Lambda functions have been configured with a five-minute timeout.

How can the Solutions Architect reduce the cost of the current architecture?

A. Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database. Enable local caching in the mobile application to reduce the Lambda function invocation calls. Monitor the Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Offload the frequently accessed records from DynamoDB to Amazon ElastiCache.

B. Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database. Cache the API Gateway results in Amazon CloudFront. Use Amazon EC2 Reserved Instances instead of Lambda. Enable Auto Scaling on EC2, and use Spot Instances during peak times. Enable DynamoDB Auto Scaling to manage target utilization.

C. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL. Enable caching of the Amazon API Gateway results in Amazon CloudFront to reduce the number of Lambda function invocations. Monitor the Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Enable DynamoDB Accelerator for frequently accessed records, and enable the DynamoDB Auto Scaling feature.

D. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL. Enable API caching on API Gateway to reduce the number of Lambda function invocations. Continue to monitor the AWS Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Enable Auto Scaling in DynamoDB.

A

D. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL. Enable API caching on API Gateway to reduce the number of Lambda function invocations. Continue to monitor the AWS Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Enable Auto Scaling in DynamoDB.
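
Enabling the stage cache from answer D is a pair of patch operations (the API ID, stage, and cache size are hypothetical):

```python
import boto3

apigateway = boto3.client("apigateway")

apigateway.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},  # GB
    ],
)
```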

28
Q

Q140. A Solutions Architect must create a cost-effective backup solution for a company's 500 MB source code repository of proprietary and sensitive applications. The repository runs on Linux and backs up daily to tape. Tape backups are stored for 1 year.
The current solution is not meeting the company's needs because it is a manual process that is prone to error, expensive to maintain, and does not meet the need for a Recovery Point Objective (RPO) of 1 hour or a Recovery Time Objective (RTO) of 2 hours. The new disaster recovery requirement is for backups to be stored offsite and to be able to restore a single file if needed.
Which solution meets the customer's needs for RTO, RPO, and disaster recovery with the LEAST effort and expense?

A. Replace local tapes with an AWS Storage Gateway virtual tape library to integrate with the current backup software. Run backups nightly and store the virtual tapes in Amazon S3 Standard storage in us-east-1. Use cross-region replication to create a second copy in us-west-2. Use Amazon S3 lifecycle policies to perform automatic migration to Amazon Glacier and deletion of expired backups after 1 year.

B. Configure the local source code repository to synchronize files to an AWS Storage Gateway file gateway to store backup copies in an Amazon S3 Standard bucket. Enable versioning on the Amazon S3 bucket. Create Amazon S3 lifecycle policies to automatically migrate old versions of objects to Amazon S3 Standard-Infrequent Access, then Amazon Glacier, then delete backups after 1 year.

C. Replace the local source code repository storage with a Storage Gateway stored volume. Change the default snapshot frequency to 1 hour. Use Amazon S3 lifecycle policies to archive snapshots to Amazon Glacier and remove old snapshots after 1 year. Use cross-region replication to create a copy of the snapshots in us-west-2.

D. Replace the local source code repository storage with a Storage Gateway cached volume. Create a snapshot schedule to take hourly snapshots. Use an Amazon CloudWatch Events schedule expression rule to run an hourly AWS Lambda task to copy snapshots from us-east-1 to us-west-2.

A

A. Replace local tapes with an AWS Storage Gateway virtual tape library to integrate with the current backup software. Run backups nightly and store the virtual tapes in Amazon S3 Standard storage in us-east-1. Use cross-region replication to create a second copy in us-west-2. Use Amazon S3 lifecycle policies to perform automatic migration to Amazon Glacier and deletion of expired backups after 1 year.

29
Q

Q139. A company is running multiple applications on Amazon EC2. Each application is deployed and managed by multiple business units. All applications are deployed on a single AWS account but in different virtual private clouds (VPCs). The company uses a separate VPC in the same account for test and development purposes.
Production applications have suffered multiple outages when users accidentally terminated and modified resources that belonged to another business unit. A Solutions Architect has been asked to improve the availability of the company's applications while allowing the Developers access to the resources they need.
Which option meets the requirements with the LEAST disruption?

A. Create an AWS account for each business unit. Move each business unit's instances to its own account and set up a federation to allow users to access their business unit's account.
B. Set up a federation to allow users to use their corporate credentials, and lock the users down to their own VPC. Use a network ACL to block each VPC from accessing other VPCs.
C. Implement a tagging policy based on business units. Create an IAM policy so that each user can terminate instances belonging to their own business unit only.
D. Set up role-based access for each user and provide limited permissions based on individual roles and the services for which each user is responsible.

A

C. Implement a tagging policy based on business units. Create an IAM policy so that each user can terminate instances belonging to their own business unit only.
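
One way to express answer C is an IAM policy whose condition compares the instance's tag to the caller's tag; the tag key and policy name are assumptions:

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:TerminateInstances",
        "Resource": "*",
        "Condition": {"StringEquals": {
            # Only instances whose BusinessUnit tag matches the caller's.
            "ec2:ResourceTag/BusinessUnit": "${aws:PrincipalTag/BusinessUnit}"
        }},
    }],
}
iam.create_policy(PolicyName="terminate-own-bu-only", PolicyDocument=json.dumps(policy))
```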

30
Q
Q138. A company with several AWS accounts is using AWS Organizations and service control policies (SCPs). An Administrator created the following SCP and has attached it to an organizational unit (OU) that contains AWS account 1111-1111-1111:

Developers working in account 1111-1111-1111 complain that they cannot create Amazon S3 buckets.

How should the Administrator address this problem?

A. Add s3:CreateBucket with the "Allow" effect to the SCP.
B. Remove the account from the OU, and attach the SCP directly to account 1111-1111-1111.
C. Instruct the Developers to add Amazon S3 permissions to their IAM entities.
D. Remove the SCP from account 1111-1111-1111.

A

C. Instruct the Developers to add Amazon S3 permissions to their IAM entities.

31
Q
Q137. A company wants to move a web application to AWS. The application stores session information locally on each web server, which will make auto scaling difficult. As part of the migration, the application will be rewritten to decouple the session data from the web servers. The company requires low latency, scalability, and availability.
Which service will meet the requirements for storing the session information in the MOST cost-effective way?

A. Amazon ElastiCache with the Memcached engine
B. Amazon S3
C. Amazon RDS MySQL
D. Amazon ElastiCache with the Redis engine

A

C. Amazon RDS MySQL

32
Q

Q136.
A company is running a high-user-volume media-sharing application on-premises. It currently hosts about 400 TB of data with millions of video files. The company is migrating this application to AWS to improve reliability and reduce costs.
The Solutions Architecture team plans to store the videos in an Amazon S3 bucket and use Amazon CloudFront to distribute videos to users. The company needs to migrate this application to AWS within 10 days with the least amount of downtime possible. The company currently has 1 Gbps connectivity to the internet with 30 percent free capacity.
Which of the following solutions would enable the company to migrate the workload to AWS and meet all of the requirements?

A. Use a multipart upload in an Amazon S3 client to parallel-upload the data to the Amazon S3 bucket over the internet. Use the throttling feature to ensure that the Amazon S3 client does not use more than 30 percent of available internet capacity.

B. Request an AWS Snowmobile with 1 PB capacity to be delivered to the data center. Load the data into the Snowmobile and send it back to have AWS download that data to the Amazon S3 bucket. Sync the new data that was generated while the migration was in flight.

C. Use an Amazon S3 client to transfer data from the data center to the Amazon S3 bucket over the internet. Use the throttling feature to ensure the Amazon S3 client does not use more than 30 percent of available internet capacity.

D. Request multiple AWS Snowball devices to be delivered to the data center. Load the data concurrently into these devices and send them back. Have AWS download that data to the Amazon S3 bucket. Sync the new data that was generated while the migration was in flight.

A

D. Request multiple AWS Snowball devices to be delivered to the data center. Load the data concurrently into these devices and send them back. Have AWS download that data to the Amazon S3 bucket. Sync the new data that was generated while the migration was in flight.
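
A quick feasibility check of the online options, which is what rules them out:

```python
free_bps = 1e9 * 0.30          # 30% of the 1 Gbps internet link
dataset_bits = 400e12 * 8      # 400 TB in bits
days = dataset_bits / free_bps / 86400
print(f"~{days:.0f} days")     # ~123 days, far beyond the 10-day window
```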

33
Q

Q135. An organization has recently grown through acquisitions. Two of the purchased companies use the same IP CIDR range. There is a new short-term requirement to allow AnyCompany A (VPC-A) to communicate with a server that has the IP address 10.0.0.77 in AnyCompany B (VPC-B). AnyCompany A must also communicate with all resources in AnyCompany C (VPC-C). The Network team has created the VPC peering links, but it is having issues with communications between VPC-A and VPC-B. After an investigation, the team believes that the routing tables in the VPCs are incorrect.

What configuration will allow AnyCompany A to communicate with AnyCompany C in addition to the database in AnyCompany B?

A. On VPC-A, create a static route for the VPC-B CIDR range (10.0.0.0/24) across VPC peer pcx-AB. Create a static route of 10.0.0.0/16 across VPC peer pcx-AC. On VPC-B, create a static route for the VPC-A CIDR (172.16.0.0/24) on peer pcx-AB. On VPC-C, create a static route for the VPC-A CIDR (172.16.0.0/24) across peer pcx-AC.

B. On VPC-A, enable dynamic route propagation on pcx-AB and pcx-AC. On VPC-B, enable dynamic route propagation and use security groups to allow only the IP address 10.0.0.77/32 on VPC peer pcx-AB. On VPC-C, enable dynamic route propagation with VPC-A on peer pcx-AC.

C. On VPC-A, create network access control lists that block the IP address 10.0.0.77/32 on VPC peer pcx-AB. On VPC-A, create a static route for the VPC-C CIDR (10.0.0.0/24) on pcx-AC. On VPC-B, create a static route for the VPC-A CIDR (172.16.0.0/24) on peer pcx-AB. On VPC-C, create a static route for the VPC-A CIDR (172.16.0.0/24) across peer pcx-AC.

D. On VPC-A, create a static route for the VPC-B database (10.0.0.77/32) across VPC peer pcx-AB. Create a static route for the VPC-C CIDR on VPC peer pcx-AC. On VPC-B, create a static route for the VPC-A CIDR (172.16.0.0/24) on peer pcx-AB. On VPC-C, create a static route for the VPC-A CIDR (172.16.0.0/24) across peer pcx-AC.

A

D. On VPC-A, create a static route for the VPC-B database (10.0.0.77/32) across VPC peer pcx-AB. Create a static route for the VPC-C CIDR on VPC peer pcx-AC. On VPC-B, create a static route for the VPC-A CIDR (172.16.0.0/24) on peer pcx-AB. On VPC-C, create a static route for the VPC-A CIDR (172.16.0.0/24) across peer pcx-AC.
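
The VPC-A half of answer D as boto3 calls (route table and peering IDs are placeholders). The /32 route is more specific than VPC-C's /24, which is how traffic for the one server in the overlapping CIDR is disambiguated:

```python
import boto3

ec2 = boto3.client("ec2")

# To the single VPC-B database host over pcx-AB.
ec2.create_route(RouteTableId="rtb-vpcA", DestinationCidrBlock="10.0.0.77/32",
                 VpcPeeringConnectionId="pcx-ab111111")
# To all of VPC-C over pcx-AC.
ec2.create_route(RouteTableId="rtb-vpcA", DestinationCidrBlock="10.0.0.0/24",
                 VpcPeeringConnectionId="pcx-ac222222")
```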

34
Q

Q134.
A company wants to ensure that the workloads for each of its business units have complete autonomy and a minimal blast radius in AWS. The Security team must be able to control access to the resources and services in the account to ensure that particular services are not used by the business units.
How can a Solutions Architect achieve the isolation requirements?

A. Create individual accounts for each business unit and add the accounts to an OU in AWS Organizations. Modify the OU to ensure that the particular services are blocked. Federate each account with an IdP, and create separate roles for the business units and the Security team.

B. Create individual accounts for each business unit. Federate each account with an IdP, and create separate roles and policies for the business units and the Security team.

C. Create one shared account for the entire company. Create separate VPCs for each business unit. Create individual IAM policies and resource tags for each business unit. Federate each account with an IdP, and create separate roles for the business units and the Security team.

D. Create one shared account for the entire company. Create individual IAM policies and resource tags for each business unit. Federate the account with an IdP, and create separate roles for the business units and the Security team.

A

A. Create individual accounts for each business unit and add the accounts to an OU in AWS Organizations. Modify the OU to ensure that the particular services are blocked. Federate each account with an IdP, and create separate roles for the business units and the Security team.
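
The "block particular services" piece of answer A maps to an SCP attached to the OU; the denied services and OU ID below are examples only:

```python
import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["redshift:*", "sagemaker:*"],  # hypothetical blocked services
        "Resource": "*",
    }],
}
policy = org.create_policy(
    Name="block-unapproved-services",
    Description="Deny services the Security team has not approved",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                  TargetId="ou-abcd-12345678")
```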

35
Q

Q133.
A company operates a group of imaging satellites. The satellites stream data to one of the company’s ground stations where processing creates about 5 GB of images per minute. This data is added to network-attached storage, where 2 PB of data is already stored. The company runs a website that allows its customers to access and purchase the images over the
internet. This website is also running in the ground station.

Usage analysis shows that customers are
most likely to access images that have been captured in the last 24 hours.

The company would like to migrate the image storage and distribution system to AWS to reduce costs and increase the number of customers that can be served.
Which AWS architecture and migration strategy will meet these requirements?

A. Use multiple AWS Snowball appliances to migrate the existing imagery to Amazon S3. Create a 1-Gbps
AWS Direct Connect connection from the ground station to AWS and upload new data to Amazon S3
through the Direct Connect connection. Migrate the data distribution website to Amazon EC2 instances.
By using Amazon S3 as an origin, have this website serve the data through Amazon CloudFront by
creating signed URLs.

B. Create a 1-Gbps Direct Connect connection from the ground station to AWS. Use the AWS Command
Line Interface to copy the existing data and upload new data to Amazon S3 over the Direct Connect
connection. Migrate the data distribution website to EC2 instances. By using Amazon S3 as an origin,
have this website serve the data through CloudFront by creating signed URLs.

C. Use multiple Snowball appliances to migrate the existing images to Amazon S3. Upload new data by
regularly using Snowball appliances to upload data from the network-attached storage. Migrate the data
distribution website to EC2 instances. By using Amazon S3 as an origin, have this website serve the data
through CloudFront by creating signed URLs.

D. Use multiple Snowball appliances to migrate the existing images to an Amazon EFS file system.
Create a 1-Gbps Direct Connect connection from the ground station to AWS and upload new data by
mounting the EFS file system over the Direct Connect connection. Migrate the data distribution website to
EC2 instances. By using web servers in EC2 that mount the EFS file system as the origin, have this
website serve the data through CloudFront by creating signed URLs.

A

A. Use multiple AWS Snowball appliances to migrate the existing imagery to Amazon S3. Create a 1-Gbps AWS Direct Connect connection from the ground station to AWS and upload new data to Amazon S3 through the Direct Connect connection. Migrate the data distribution website to Amazon EC2 instances.
By using Amazon S3 as an origin, have this website serve the data through Amazon CloudFront by
creating signed URLs.
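A minimal sketch of generating the signed URLs mentioned in the answer, assuming a CloudFront key pair whose private key sits in private_key.pem (the key pair ID, distribution domain, and object path are all hypothetical):

```python
from datetime import datetime, timedelta

import rsa  # third-party package used here for RSA-SHA1 signing
from botocore.signers import CloudFrontSigner

KEY_PAIR_ID = "KXXXXXXXXXXXXX"  # hypothetical CloudFront key pair ID

def rsa_signer(message: bytes) -> bytes:
    with open("private_key.pem", "rb") as f:
        return rsa.sign(message, rsa.PrivateKey.load_pkcs1(f.read()), "SHA-1")

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# URL expires in 1 hour; customers mostly want imagery from the last 24 hours.
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/images/pass-0001.jpg",
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(url)
```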

36
Q

Q132.A company is migrating an application to AWS. It wants to use fully managed services as much as
possible during the migration. The company needs to store large, important documents within the
application with the following requirements:

  1. The data must be highly durable and available.
  2. The data must always be encrypted at rest and in transit.
  3. The encryption key must be managed by the company and rotated periodically.

Which of the following solutions should the Solutions Architect recommend?

A. Deploy the storage gateway to AWS in file gateway mode. Use Amazon EBS volume encryption using an AWS KMS key to encrypt the storage gateway volumes.

B. Use Amazon S3 with a bucket policy to enforce HTTPS for connections to the bucket and to enforce
server-side encryption and AWS KMS for object encryption.

C. Use Amazon DynamoDB with SSL to connect to DynamoDB. Use an AWS KMS key to encrypt
DynamoDB objects at rest.

D. Deploy instances with Amazon EBS volumes attached to store this data. Use EBS volume encryption using an AWS KMS key to encrypt the data.

A

A. Deploy the storage gateway to AWS in file gateway mode. Use Amazon EBS volume encryption using an AWS KMS key to encrypt the storage gateway volumes.
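Whichever storage option is chosen, requirement 3 maps naturally to a customer managed AWS KMS key with automatic rotation enabled. A minimal boto3 sketch (the alias name is hypothetical):

```python
import boto3

kms = boto3.client("kms")

# Create a customer managed key and turn on automatic annual rotation.
key = kms.create_key(Description="Document encryption key")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)
kms.create_alias(AliasName="alias/document-store", TargetKeyId=key_id)
```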

37
Q

Q131. A company has asked a Solutions Architect to design a secure content management solution that can be
accessed by API calls by external customer applications. The company requires that a customer administrator must be able to submit an API call and roll back changes to existing files sent to the content management solution, as needed.

What is the MOST secure deployment design that meets all solution requirements?

A. Use Amazon S3 for object storage with versioning and bucket access logging enabled, and an IAM role and access policy for each customer application. Encrypt objects using SSE-KMS. Develop the content management application to use a separate AWS KMS key for each customer.

B. Use Amazon WorkDocs for object storage. Leverage WorkDocs encryption. Use access management and version control. Use AWS CloudTrail to log all SDK actions and create reports of hourly access by using an Amazon CloudWatch dashboard. Enable a revert function in the SDK based on a static Amazon S3 web page that shows the output of the CloudWatch dashboard.

C. Use Amazon EFS for object storage, using encryption at rest for the Amazon EFS volume and a customer-managed key stored in AWS KMS. Use IAM roles and Amazon EFS access policies to specify separate encryption keys for each customer application. Deploy the content management application to
store all new versions as new files in Amazon EFS and use a control API to revert a specific file to a previous version.

D. Use Amazon S3 for object storage with versioning and enable S3 bucket access logging. Use an IAM role and access policy for each customer application. Encrypt objects using client-side encryption and distribute an encryption key to all customers when accessing the content management application.

A

B. Use Amazon WorkDocs and version control. Use AWS CloudTrail to log all SDK actions and create reports of hourly access by using the Amazon CloudWatch dashboard. Enable a revert function in the SDK based on a static Amazon S3 web page that shows the output of the CloudWatch dashboard.

38
Q

Q130. A company uses Amazon S3 to store documents that may only be accessible to an Amazon EC2 instance in a certain virtual private cloud (VPC). The company fears that a malicious insider with access to this instance could also set up an EC2 instance in another VPC to access these documents.
Which of the following solutions will provide the required protection?

A. Use an S3 VPC endpoint and an S3 bucket policy to limit access to this VPC endpoint

B. Use EC2 instance profiles and an S3 bucket policy to limit access to the role attached to the instance
profile.

C. Use S3 client-side encryption and store the key in the instance metadata.

D. Use S3 server-side encryption and protect the key with an encryption context.

A

A. Use an S3 VPC endpoint and an S3 bucket policy to limit access to this VPC endpoint
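A hedged sketch of the bucket policy behind answer A, denying any request that does not arrive through the expected VPC endpoint (bucket name and endpoint ID are hypothetical):

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllButTheVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::example-docs", "arn:aws:s3:::example-docs/*"],
        # Requests from any other path (including other VPCs) are denied.
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-1a2b3c4d"}},
    }],
}
s3.put_bucket_policy(Bucket="example-docs", Policy=json.dumps(policy))
```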

39
Q

Q129. A company has an application that uses Amazon EC2 instances in an Auto Scaling group. The Quality
Assurance (QA) department needs to launch a large number of short-lived environments to test the
application. The application environments are currently launched by the Manager of the department using
an AWS CloudFormation template. To launch the stack, the Manager uses a role with permission to use
CloudFormation, EC2, and Auto Scaling APIs. The Manager wants to allow testers to launch their own
environments but does not want to grant broad permissions to each user.
Which setup would achieve these goals?

A. Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department
permission to assume the Manager's role and add a policy that restricts the permissions to the template
and the resources it creates. Train users to launch the template from the CloudFormation console.

B. Create an AWS Service Catalog product from the environment template. Add a launch constraint to the
product with the existing role. Give users in the QA department permission to use AWS Service Catalog
APIs only. Train users to launch the template from the AWS Service Catalog console.

C. Upload the AWS CloudFormation template to Amazon S3. Give users in the QA department
permission to use CloudFormation and S3 APIs, with conditions that restrict the permissions to the
template and the resources it creates. Train users to launch the template from the CloudFormation
console.

D. Create an AWS Elastic Beanstalk application from the environment template. Give users in the QA
department permission to use Elastic Beanstalk permissions only. Train users to launch Elastic Beanstalk
environments with the Elastic Beanstalk CLI, passing the existing role to the environment as a service
role.

A

B. Create an AWS Service Catalog product from the environment template. Add a launch constraint to the
product with the existing role. Give users in the QA department permission to use AWS Service Catalog
APIs only. Train users to launch the template from the AWS Service Catalog console.
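Once the product and launch constraint exist, a tester's launch reduces to a single Service Catalog call. A sketch with hypothetical IDs:

```python
import boto3

sc = boto3.client("servicecatalog")

# The launch constraint supplies the Manager's role, so the tester only needs
# Service Catalog permissions, not EC2/CloudFormation/Auto Scaling access.
sc.provision_product(
    ProductId="prod-xxxxxxxxxxxxx",             # hypothetical
    ProvisioningArtifactId="pa-xxxxxxxxxxxxx",  # hypothetical template version
    ProvisionedProductName="qa-env-001",
)
```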

40
Q

Q128. A company wants to allow its Marketing team to perform SQL queries on customer records to identify
market segments. The data is spread across hundreds of files. The records must be encrypted in transit and at rest. The Team Manager must have the ability to manage users and groups, but no team members should have access to services or resources not required for the SQL queries. Additionally,
Administrators need to audit the queries made and receive notifications when a query violates rules defined by the Security team. AWS Organizations has been used to create a new account and an AWS IAM user with administrator
permissions for the Team Manager. Which design meets these requirements?

A. Apply a service control policy (SCP) that allows access to IAM, Amazon RDS, and AWS CloudTrail. Load customer records in Amazon RDS MySQL and train users to execute queries using the AWS CLI. Stream the query logs to Amazon CloudWatch Logs from the RDS database instance. Use a subscription filter with AWS Lambda functions to audit and alarm on queries against personal data.

B. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store customer record files in Amazon S3 and train users to execute queries using the CLI via Athena. Analyze CloudTrail events to audit and alarm on queries against
personal data.

C. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon DynamoDB, and AWS CloudTrail. Store customer records in DynamoDB and train users to execute queries using the AWS CLI. Enable DynamoDB Streams to track the queries that are issued and use an AWS Lambda function for real-time monitoring and alerting.

D. Apply a service control policy (SCP) that allows access to IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store customer records as files in Amazon S3 and train users to leverage the Amazon S3 Select feature and execute queries using the AWS CLI. Enable S3 object-level logging and analyze
CloudTrail events to audit and alarm on queries against personal data.

A

B. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store customer record files in Amazon S3 and train users to execute queries using the CLI via Athena. Analyze CloudTrail events to audit and alarm on queries against
personal data.
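A minimal sketch of the Athena query path from answer B (the database, table, and results bucket are hypothetical). Every execution is recorded by CloudTrail, which is what the audit and alarm pipeline analyzes:

```python
import boto3

athena = boto3.client("athena")

# Each StartQueryExecution call shows up as a CloudTrail event for auditing.
athena.start_query_execution(
    QueryString="SELECT segment, COUNT(*) FROM customers GROUP BY segment",
    QueryExecutionContext={"Database": "marketing"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```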

41
Q

Q127.
A company is creating an account strategy so that they can begin using AWS. The Security team will
provide each team with the permissions they need to follow the principle of least privileged access.
Teams would like to keep their resources isolated from other groups, and the Finance team would like
each team’s resource usage separated for billing purposes.
Which account creation process meets these requirements and allows for changes?

A. Create a new AWS Organizations account. Create groups in Active Directory and assign them to roles
in AWS to grant federated access. Require each team to tag their resources, and separate bills based on
tags. Control access to resources through IAM, granting the minimum required privilege.

B. Create individual accounts for each team. Assign the security account as the master account, and
enable consolidated billing for all other accounts. Create a cross-account role for security to manage accounts, and send logs to a bucket in the security account.

C. Create a new AWS account, and use AWS Service Catalog to provide teams with the required
resources. Implement a third-party billing solution to provide the Finance team with the resource use for
each team based on tagging. Isolate resources using IAM to avoid account sprawl. Security will control
and monitor logs and permissions.

D. Create a master account for billing using Organizations, and create each team's account from that
master account. Create a security account for logs and cross-account access. Apply service control
policies on each account, and grant the Security team cross-account access to all accounts. Security will
create an IAM policy for each account to maintain least privilege access.

A

D. Create a master account for billing using Organizations, and create each team's account from that master account. Create a security account for logs and cross-account access. Apply service control policies on each account, and grant the Security team cross-account access to all accounts. Security will
create an IAM policy for each account to maintain least privilege access.
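Creating each team's account from the master (management) account is a single Organizations call. A hedged sketch (email and account name hypothetical):

```python
import boto3

org = boto3.client("organizations")

# One member account per team, created from the billing/management account.
response = org.create_account(
    Email="team-alpha@example.com",  # hypothetical
    AccountName="team-alpha",
)
print(response["CreateAccountStatus"]["State"])  # e.g. IN_PROGRESS
```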

42
Q

Q126. A company has developed a new billing application that will be released in two weeks. Developers are
testing the application running on 10 EC2 instances managed by an Auto Scaling group in the subnet
172.31.0.0/24 within VPC A with CIDR block 172.31.0.0/16. The Developers noticed connection timeout
errors in the application logs while connecting to an Oracle database running on an Amazon EC2
instance in the same region within VPC B with CIDR block 172.50.0.0/16. The IP of the database
instance is hardcoded in the application instances.
Which recommendations should a Solutions Architect present to the Developers to solve the problem in a
secure way with minimal maintenance and overhead?

A. Disable the SrcDestCheck attribute for all instances running the application and the Oracle database.
Change the default route of VPC A to point to the ENI of the Oracle database that has an IP address assigned
within the range of 172.50.0.0/16.

B. Create and attach internet gateways for both VPCs. Configure default routes to the internet gateways
for both VPCs. Assign an Elastic IP for each Amazon EC2 instance in VPC A.

C. Create a VPC peering connection between the two VPCs and add a route to the routing table of VPC
A that points to the IP address range of 172.50.0.0/16.

D. Create an additional Amazon EC2 instance for each VPC as a customer gateway, create one virtual
private gateway (VGW) for each VPC, configure an end-to-end VPN, and advertise the routes for
172.50.0.0/16.

A

C. Create a VPC peering connection between the two VPCs and add a route to the routing table of VPC A that points to the IP address range of 172.50.0.0/16.

43
Q

Q125.
The company Security team requires that all data uploaded into an Amazon S3 bucket must be encrypted. The encryption keys must be highly available, and the company must be able to control access on a per-user basis, with different users having access to different encryption keys.

Which of the following architectures will meet these requirements? (Select TWO.)

A. Use Amazon S3 server-side encryption with Amazon S3-managed keys. Allow Amazon S3 to generate
an AWS/S3 master key, and use IAM to control access to the data keys that are generated.

B. Use Amazon S3 server-side encryption with AWS KMS-managed keys, create multiple customer master keys, and use key policies to control access to them.

C. Use Amazon S3 server-side encryption with customer-managed keys, and use AWS CloudHSM to
manage the keys. Use CloudHSM client software to control access to the keys that are generated.

D. Use Amazon S3 server-side encryption with customer-managed keys, and use two AWS CloudHSM
instances configured in high-availability mode to manage the keys. Use the CloudHSM client software to
control access to the keys that are generated.

E. Use Amazon S3 server-side encryption with customer-managed keys, and use two AWS CloudHSM
instances configured in high-availability

A

B. Use Amazon S3 server-side encryption with AWS KMS-managed keys, create multiple customer master keys, and use key policies to control access to them.

D. Use Amazon S3 server-side encryption with customer-managed keys, and use two AWS CloudHSM
instances configured in high-availability mode to manage the keys. Use the CloudHSM client software to
control access to the keys that are generated.

44
Q

Q124. A company is migrating its on-premises build artifact server to an AWS solution. The current system
consists of an Apache HTTP server that serves artifacts to clients on the local network, restricted by the
perimeter firewall. The artifact consumers are largely built automation scripts that download artifacts via
anonymous HTTP, which the company will be unable to modify within its migration timetable
The company decides to move the solution to Amazon S3 static website hosting. The artifact consumers
will be migrated to Amazon EC2 instances located within both public and private subnets in a virtual
private cloud (VPC). Which solution will permit the artifact consumers to download artifacts without modifying the existing
automation scripts?

A. Create a NAT gateway within a public subnet of the VPC. Add a default route pointing to the NAT
gateway into the route table associated with the subnets containing consumers. Configure the bucket
policy to allow the s3:ListBucket and s3:GetObject actions using the condition IpAddress and the
condition key aws:SourceIp matching the Elastic IP address of the NAT gateway.

B. Create a VPC endpoint and add it to the route table associated with subnets containing consumers.
Configure the bucket policy to allow the s3:ListBucket and s3:GetObject actions using the condition
StringEquals and the condition key aws:sourceVpce matching the identification of the VPC endpoint.

C. Create an IAM role and instance profile for Amazon EC2 and attach it to the instances that consume
build artifacts. Configure the bucket policy to allow the s3:ListBucket and s3:GetObject actions for the
principal matching the IAM role created.

D. Create a VPC endpoint and add it to the route table associated with subnets containing consumers.
Configure the bucket policy to allow the s3:ListBucket and s3:GetObject actions using the condition
IpAddress and the condition key aws:SourceIp matching the VPC CIDR block.

A

B. Create a VPC endpoint and add it to the route table associated with subnets containing consumers.
Configure the bucket policy to allow the s3:ListBucket and s3:GetObject actions using the condition StringEquals and the condition key aws:sourceVpce matching the identification of the VPC endpoint.

45
Q

Q123. A Development team is deploying new APIs as serverless applications within a company. The team is
currently using the AWS Management Console to provision Amazon API Gateway, AWS Lambda, and
Amazon DynamoDB resources. A Solutions Architect has been tasked with automating the future
deployments of these serverless APIs.
How can this be accomplished?

A. Use AWS CloudFormation with a Lambda-backed custom resource to provision API Gateway. Use the
AWS::DynamoDB::Table and AWS::Lambda::Function resources to create the Amazon DynamoDB
table and Lambda functions. Write a script to automate the deployment of the CloudFormation template.

B. Use the AWS Serverless Application Model to define the resources. Upload a YAML template and
application files to the code repository. Use AWS CodePipeline to connect to the code repository and to
create an action to build using AWS CodeBuild. Use the AWS CloudFormation deployment provider in
CodePipeline to deploy the solution.

C. Use AWS CloudFormation to define the serverless application. Implement versioning on the Lambda
functions and create aliases to point to the versions. When deploying, configure weights to shift
traffic to the newest version, and gradually update the weights as traffic moves over.

D. Commit the application code to the AWS CodeCommit code repository. Use AWS CodePipeline and
connect to the CodeCommit code repository. Use AWS CodeBuild to build and deploy the Lambda
functions using AWS CodeDeploy. Specify the deployment preference type in CodeDeploy to gradually
shift traffic over to the new version

A

B. Use the AWS Serverless Application Model to define the resources. Upload a YAML template and
application files to the code repository. Use AWS CodePipeline to connect to the code repository and to
create an action to build using AWS CodeBuild. Use the AWS CloudFormation deployment provider in
CodePipeline to deploy the solution.

46
Q
Q122. A company is implementing a multi-account strategy; however, the Management team has expressed concerns that services like DNS may become overly complex. The company needs a solution that allows private DNS to be shared among virtual private clouds (VPCs) in different accounts. The company will have approximately 50 accounts in total.
What solution would create the LEAST complex DNS architecture and ensure that each VPC can resolve
all AWS resources?

A. Create a shared services VPC in a central account and create a VPC peering connection from the shared services VPC to each of the VPCs in the other accounts. Within Amazon Route 53, create a private hosted zone in the shared services VPC and resource record sets for the domain and
subdomains. Programmatically associate the other VPCs with the hosted zone.

B. Create a VPC peering connection among the VPCs in all accounts. Set the VPC attributes
enableDnsHostnames and enableDnsSupport to "true" for each VPC. Create an Amazon Route 53
private hosted zone for each VPC. Create resource record sets for the domain and subdomains.
Programmatically associate the hosted zones in each VPC with the other VPCs.

C. Create a shared services VPC in a central account. Create a VPC peering connection from the VPCs
in other accounts to the shared services VPC. Create an Amazon Route 53 private hosted zone in the
shared services VPC with resource record sets for the domain and subdomains. Allow UDP and TCP port
53 over the VPC peering connections.

D. Set the VPC attributes enableDnsHostnames and enableDnsSupport to "false" in every VPC. Create
an AWS Direct Connect connection with a private virtual interface. Allow UDP and TCP port 53 over the
virtual interface. Use the on-premises DNS servers to resolve the IP addresses in each VPC on AWS.

A

A. Create a shared services VPC in a central account and create a VPC peering connection from the
shared services VPC to each of the VPCs in the other accounts. Within Amazon Route 53, create a
private hosted zone in the shared services VPC and resource record sets for the domain and
subdomains. Programmatically associate the other VPCs with the hosted zone.
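The "programmatically associate" step in answer A is two Route 53 calls per spoke VPC: the zone owner authorizes the association, then the VPC owner completes it. A hedged sketch with hypothetical IDs:

```python
import boto3

vpc = {"VPCRegion": "us-east-1", "VPCId": "vpc-0spoke111"}  # hypothetical spoke VPC

# Run in the central (shared services) account that owns the private hosted zone.
central = boto3.client("route53")
central.create_vpc_association_authorization(HostedZoneId="Z111111", VPC=vpc)

# Run with credentials for the spoke account that owns the VPC.
spoke = boto3.client("route53")
spoke.associate_vpc_with_hosted_zone(HostedZoneId="Z111111", VPC=vpc)
```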

47
Q

Q121. A Solutions Architect is working with a company that is extremely sensitive to its IT costs and wishes to
implement controls that will result in a predictable AWS spend each month. Which combination of steps can help the company control and monitor its monthly AWS usage to achieve
a cost that is as close as possible to the target amount? (Select THREE.)

A. Implement an IAM policy that requires users to specify a 'workload' tag for cost allocation when
launching Amazon EC2 instances.

B. Contact AWS Support and ask that they apply limits to the account so that users are not able to launch
more than a certain number of instance types

C. Purchase all upfront Reserved Instances that cover 100% of the account’s expected Amazon EC2
usage.

D. Place conditions in the users' IAM policies that limit the number of instances they are able to launch.

E. Define 'workload' as a cost allocation tag in the AWS Billing and Cost Management console.

F. Set up AWS Budgets to alert and notify when a given workload is expected to exceed a defined cost.

A

A. Implement an IAM policy that requires users to specify a 'workload' tag for cost allocation when
launching Amazon EC2 instances.

C. Purchase all upfront Reserved Instances that cover 100% of the account's expected Amazon EC2 usage.

F. Set up AWS Budgets to alert and notify when a given workload is expected to exceed a defined cost.
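Option F can be sketched with the Budgets API, assuming 'workload' has already been activated as a cost allocation tag (account ID, amounts, and addresses are hypothetical):

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # hypothetical
    Budget={
        "BudgetName": "analytics-workload",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
        # Scope the budget to one workload via its cost allocation tag.
        "CostFilters": {"TagKeyValue": ["user:workload$analytics"]},
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "FORECASTED",   # alert before the overrun happens
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 100.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "finops@example.com"}],
    }],
)
```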

48
Q

Q120. A company ingests and processes streaming market data. The data rate is constant. A nightly process
that calculates aggregate statistics is run, and each execution takes about 4 hours to complete. The statistical analysis is not mission-critical to the business, and previous data points are picked up on the next execution if a particular run fails.

The current architecture uses a pool of Amazon EC2 Reserved Instances with 1-year reservations running full time to ingest and store the streaming data in attached Amazon EBS volumes.

On-Demand EC2 instances are launched each night to perform the nightly processing, accessing the stored data from
NFS shares on the ingestion servers, and terminating the nightly processing servers when complete. The
Reserved Instance reservations are expiring, and the company needs to determine whether to purchase
new reservations or implement a new design.

Which is the most cost-effective design?

A. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon S3. Use
a fleet of On-Demand EC2 instances that launches each night to perform the batch processing of the S3
data and terminates when the processing completes.

B. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon S3. Use
AWS Batch to perform nightly processing with a Spot market bid of 50% of the On-Demand price.

C. Update the ingestion process to use a fleet of EC2 Reserved Instances behind a Network Load Balancer with 3-year leases. Use Batch with Spot Instances with a maximum bid of 50% of the On-Demand price for the nightly processing.

D. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon Redshift.
Use an AWS Lambda function scheduled to run nightly with Amazon CloudWatch Events to query Amazon Redshift to generate the daily statistics.

A

B. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon S3. Use AWS Batch to perform nightly processing with a Spot market bid of 50% of the On-Demand price.
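On the ingestion side of answer B, producers simply write records to the Firehose delivery stream and Firehose handles batching into S3. A minimal sketch (the stream name and record shape are hypothetical):

```python
import json
import boto3

firehose = boto3.client("firehose")

tick = {"symbol": "EXMPL", "price": 101.25}  # hypothetical market data point
firehose.put_record(
    DeliveryStreamName="market-data-to-s3",  # hypothetical
    Record={"Data": (json.dumps(tick) + "\n").encode("utf-8")},
)
```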

49
Q

Q119. A company has a serverless application comprised of Amazon CloudFront, Amazon API Gateway, and
AWS Lambda functions. In the current deployment process, a new version of each changed Lambda function is created and an AWS CLI script is run to update the application. If the new function version has errors, another CLI script reverts by deploying the previous working version of the function. The
company would like to decrease the time to deploy new versions of the application logic provided by the
Lambda functions, and also reduce the time to detect and revert when errors are identified. How can this be accomplished?

A. Create and deploy nested AWS CloudFormation stacks with the parent stack consisting of the Amazon CloudFront distribution and API Gateway, and the child stack containing the Lambda function. For changes to Lambda, create an AWS CloudFormation change set and deploy. If errors are triggered,
revert the AWS CloudFormation change set to the previous version.

B. Use AWS SAM and built-in AWS CodeDeploy to deploy the new Lambda version, gradually shift traffic
to the new version, and use pre-traffic and post-traffic test functions to verify code. Roll back if Amazon
CloudWatch alarms are triggered.

C. Refactor the AWS CLI scripts into a single script that deploys the new Lambda version. When the deployment is completed, the script runs tests. If errors are detected, revert to the previous Lambda version.

D. Create and deploy an AWS CloudFormation stack that consists of a new API Gateway endpoint that
references the new Lambda version. Change the CloudFront origin to the new API Gateway endpoint,
monitor errors, and if detected, change the CloudFront origin to the previous API Gateway endpoint.

A

A. Create and deploy nested AWS CloudFormation stacks with the parent stack consisting of the Amazon CloudFront distribution and API Gateway, and the child stack containing the Lambda function. For changes to Lambda, create an AWS CloudFormation change set and deploy. If errors are triggered, revert the AWS CloudFormation change set to the previous version.
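The change-set workflow in the chosen answer looks roughly like this in boto3 (stack, template, and change set names are hypothetical):

```python
import boto3

cfn = boto3.client("cloudformation")

# Stage the Lambda change as a change set, wait for it, then execute.
cfn.create_change_set(
    StackName="serverless-app-lambda",  # hypothetical child stack
    TemplateURL="https://s3.amazonaws.com/example-bucket/lambda-stack.yaml",
    ChangeSetName="deploy-v42",
    Capabilities=["CAPABILITY_IAM"],
)
cfn.get_waiter("change_set_create_complete").wait(
    StackName="serverless-app-lambda", ChangeSetName="deploy-v42"
)
cfn.execute_change_set(StackName="serverless-app-lambda", ChangeSetName="deploy-v42")
# If errors surface after deployment, redeploy the previous template version.
```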

50
Q

Q118.
A company has released a new version of a website to target an audience in Asia and South America. The website's media assets are hosted on Amazon S3 and have an Amazon CloudFront distribution to improve end-user performance. However, users are having a poor login experience because the authentication service is only available in the us-east-1 AWS Region.

How can the Solutions Architect improve the login experience and maintain high security and performance with minimal management overhead?

A. Replicate the setup in each new geography and use Amazon Route 53 geo-based routing to route
traffic to the AWS Region closest to the users.

B. Use an Amazon Route 53 weighted routing policy to route traffic to the CloudFront distribution. Use
CloudFront cached HTTP methods to improve the user login experience.

C. Use Amazon Lambda@Edge attached to the CloudFront viewer request trigger to authenticate and authorize users by maintaining a secure cookie token with a session expiry to improve the user experience in multiple geographies.

D. Replicate the setup in each geography and use Network Load Balancers to route traffic to the
authentication service running in the closest region to users.

A

C. Use Amazon Lambda@Edge attached to the CloudFront viewer request trigger to authenticate and authorize users by maintaining a secure cookie token with a session expiry to improve the user experience in multiple geographies.
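A sketch of the Lambda@Edge viewer-request handler from the answer, assuming the session token lives in a cookie named "session"; the token validation logic is a placeholder:

```python
def handler(event, context):
    """Viewer-request trigger: let the request through only with a valid cookie."""
    request = event["Records"][0]["cf"]["request"]
    cookies = request["headers"].get("cookie", [])
    token = next((c["value"] for c in cookies if "session=" in c["value"]), None)

    if token and is_valid_session(token):  # hypothetical validation helper
        return request  # authenticated: forward to the origin/cache

    # Unauthenticated: answer at the edge instead of the us-east-1 service.
    return {
        "status": "401",
        "statusDescription": "Unauthorized",
        "body": "Please log in.",
    }

def is_valid_session(token: str) -> bool:
    # Placeholder: verify the signature and expiry of the secure cookie token.
    return False
```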

51
Q

Q117. A company currently uses Amazon EBS and Amazon RDS for storage purposes. The company intends to use a pilot light approach for disaster recovery in a different AWS Region. The company has an RTO of 6 hours and an RPO of 24 hours.
Which solution would achieve the requirements with MINIMAL cost?

A. Use AWS Lambda to create daily EBS and RDS snapshots, and copy them to the disaster recovery region. Use Amazon Route 53 with active-passive failover configuration. Use Amazon EC2 in an Auto Scaling group with the capacity set to 0 in the disaster recovery region.

B. Use AWS Lambda to create daily EBS and RDS snapshots, and copy them to the disaster recovery
region. Use Amazon Route 53 with active-active failover configuration. Use Amazon EC2 in an Auto
Scaling group configured in the same way as in the primary region.

C. Use Amazon ECS to handle long-running tasks to create daily EBS and RDS snapshots, and copy to
the disaster recovery region. Use Amazon Route 53 with active-passive failover configuration. Use
Amazon EC2 in an Auto Scaling group with the capacity set to 0 in the disaster recovery region.

D. Use EBS and RDS cross-region snapshot copy capability to create snapshots in the disaster recovery
region. Use Amazon Route 53 with active-active failover configuration. Use Amazon EC2 in an Auto
Scaling group with the capacity set to 0 in the disaster recovery region.

A

A. Use AWS Lambda to create daily EBS and RDS snapshots, and copy them to the disaster recovery region. Use Amazon Route 53 with active-passive failover configuration. Use Amazon EC2 in an Auto Scaling group with the capacity set to 0 in the disaster recovery region.
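A hedged sketch of the nightly Lambda from answer A: snapshot EBS, then copy the EBS and RDS snapshots into the DR Region (all identifiers are hypothetical; encrypted snapshots would also need a KMS key in the destination):

```python
import boto3

SRC, DR = "us-east-1", "us-west-2"

def handler(event, context):
    ec2_src = boto3.client("ec2", region_name=SRC)
    ec2_dr = boto3.client("ec2", region_name=DR)
    rds_dr = boto3.client("rds", region_name=DR)

    # Daily EBS snapshot, copied cross-Region (meets the 24-hour RPO).
    snap = ec2_src.create_snapshot(VolumeId="vol-0123456789abcdef0")
    ec2_src.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
    ec2_dr.copy_snapshot(SourceRegion=SRC, SourceSnapshotId=snap["SnapshotId"])

    # Copy the latest RDS snapshot into the DR Region.
    rds_dr.copy_db_snapshot(
        SourceDBSnapshotIdentifier="arn:aws:rds:us-east-1:123456789012:snapshot:daily",
        TargetDBSnapshotIdentifier="daily-dr-copy",
        SourceRegion=SRC,  # boto3 presigns the cross-Region copy request
    )
```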

52
Q

Q116. A company prefers to limit running Amazon EC2 instances to those that were launched from AMIs pre-approved by the Information Security department. The Development team has an agile continuous
integration and deployment process that cannot be stalled by the solution.

Which method enforces the required controls with the LEAST impact on the development process?
(Select TWO.)

A. Use IAM policies to restrict the ability of users or other automated entities to launch EC2 instances based on a specific set of pre-approved AMIs, such as those tagged in a specific way by Information Security.

B. Use regular scans within Amazon Inspector with a custom assessment template to determine if the EC2 instance that the Amazon Inspector Agent is running on is based upon a pre-approved AMI. If it is not, shut down the instance and inform Information Security by email that this occurred.

C. Only allow launching of EC2 instances using a centralized DevOps team, which is given work packages via notifications from an internal ticketing system. Users make requests for resources using this ticketing tool, which has manual Information Security approval steps to ensure that EC2 instances are
only launched from approved AMIs.

D. Use AWS Config rules to spot any launches of EC2 instances based on non-approved AMIs, trigger an AWS Lambda function to automatically terminate the instance, and publish a message to an Amazon SNS topic to inform Information Security that this occurred.

E. Use a scheduled AWS Lambda function to scan through the list of running instances within the virtual
private cloud (VPC) and determine if any of these are based on unapproved AMIs. Publish a message to an SNS topic to inform Information Security that this occurred, and then shut down the instance.

A

A. Use IAM policies to restrict the ability of users or other automated entities to launch EC2 instances based on a specific set of pre-approved AMIs, such as those tagged in a specific way by Information Security.

D. Use AWS Config rules to spot any launches of EC2 instances based on non-approved AMIs, trigger an AWS Lambda function to automatically terminate the instance, and publish a message to an Amazon SNS topic to inform Information Security that this occurred.
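Answer D's detection half can lean on the managed Config rule APPROVED_AMIS_BY_ID; the Lambda/SNS remediation would hang off the rule's compliance-change events. A sketch (rule name and AMI ID hypothetical):

```python
import json
import boto3

config = boto3.client("config")

config.put_config_rule(ConfigRule={
    "ConfigRuleName": "ec2-approved-amis",
    "Source": {"Owner": "AWS", "SourceIdentifier": "APPROVED_AMIS_BY_ID"},
    # Only instances launched from these AMIs are marked compliant.
    "InputParameters": json.dumps({"amiIds": "ami-0123456789abcdef0"}),
    "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
})
```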

53
Q

Q115. A company that provides wireless services needs a solution to store and analyze log files about user
activities. Currently, log files are delivered daily to Amazon Linux on an Amazon EC2 instance. A batch
script is run once a day to aggregate data used for analysis by a third-party tool. The data pushed to the
third-party tool is used to generate a visualization for end users. The batch script is cumbersome to
maintain, and it takes several hours to deliver the ever-increasing data volumes to the third-party tool.
The company wants to lower costs and is open to considering a new tool that minimizes development
effort and lowers administrative overhead. The company wants to build a more agile solution that can
store and perform the analysis in near real time, with minimal overhead. The solution needs to be cost-effective and scalable to meet the company's end-user base growth.
Which solution meets the company's requirements?

A. Develop a Python script to capture the data from Amazon EC2 in real time and store the data in
Amazon S3. Use a copy command to copy data from Amazon S3 to Amazon Redshift. Connect a
business intelligence tool running on Amazon EC2 to Amazon Redshift and create the visualizations.

B. Use an Amazon Kinesis agent running on an EC2 instance in an Auto Scaling group to collect and
send the data to an Amazon Kinesis Data Firehose delivery stream. The Kinesis Data Firehose delivery
stream will deliver the data directly to Amazon ES. Use Kibana to visualize the data.

C. Use an in-memory caching application running on an Amazon EBS-optimized EC2 instance to capture
the log data in near real-time. Install an Amazon ES cluster on the same EC2 instance to store the log
files as they are delivered to Amazon EC2 in near real-time. Install a Kibana plugin to create the
visualizations.

D. Use an Amazon Kinesis agent running on an EC2 instance to collect and send the data to an Amazon
Kinesis Data Firehose delivery stream. The Kinesis Data Firehose delivery stream will deliver the data to
Amazon S3. Use an AWS Lambda function to deliver the data from Amazon S3 to Amazon ES. Use Kibana to visualize the data.

A

D. Use an Amazon Kinesis agent running on an EC2 instance to collect and send the data to an Amazon Kinesis Data Firehose delivery stream. The Kinesis Data Firehose delivery stream will deliver the data to Amazon S3. Use an AWS Lambda function to deliver the data from Amazon S3 to Amazon ES. Use Kibana to visualize the data.

54
Q

Q114. A company has implemented AWS Organizations. It has recently set up a number of new accounts and wants to deny access to a specific set of AWS services in these new accounts.
How can this be controlled MOST efficiently?

A. Create an IAM policy in each account that denies access to the services. Associate the policy with an IAM group, and add all IAM users to the group.

B. Create a service control policy that denies access to the services. Add all of the new accounts to a single organizational unit (OU), and apply the policy to that OU.

C. Create an IAM policy in each account that denies access to the services. Associate the policy with an IAM role, and instruct users to log in using their corporate credentials and assume the IAM role.

D. Create a service control policy that denies access to the services, and apply the policy to the root of
the organization.

A

B. Create a service control policy that denies access to the services. Add all of the new accounts to a
single organizational unit (OU), and apply the policy to that OU.

55
Q

Q113. A Solutions Architect is responsible for redesigning a legacy Java application to improve its availability,
data durability, and scalability.

Currently, the application runs on a single high-memory Amazon EC2 instance.

It accepts HTTP requests from upstream clients, adds them to an in-memory queue, and responds with a 200 status.

A separate application thread reads items from the queue, processes them, and persists the results to an Amazon RDS MySQL instance.

The processing time for each item takes 90 seconds on average, most of which is spent waiting on external service calls, but the application is written to process multiple items in parallel.

Traffic to this service is unpredictable. During periods of high load, items may sit in the internal queue for over an hour while the application processes the backlog. In addition, the current system has issues with availability and data loss if the single application node fails.

Clients that access this service cannot be modified. They expect to receive a response to each HTTP request they send within 10 seconds before they will time out and retry the request.

Which approach would improve the availability and durability of the system while decreasing the processing latency and minimizing costs?

A. Create an Amazon API Gateway REST API that uses Lambda proxy integration to pass requests to an AWS Lambda function. Migrate the core processing code to a Lambda function and write a wrapper class that provides a handler method that converts the proxy events to the internal application data model and invokes the processing module

B. Create an Amazon API Gateway REST API that uses a service proxy to put items in an Amazon SQS queue. Extract the core processing code from the existing application and update it to pull items from Amazon SQS instead of an in-memory queue. Deploy the new processing application to smaller EC2 instances within an Auto Scaling group that scales dynamically based on the approximate number of
messages in the Amazon SQS queue.

C. Modify the application to use Amazon DynamoDB instead of Amazon RDS. Configure Auto Scaling for
the DynamoDB table. Deploy the application within an Auto Scaling group with a scaling policy based on
CPU utilization. Back the in-memory queue with a memory-mapped file to an instance store volume and
periodically write that file to Amazon S3.

D. Update the application to use a Redis task queue instead of the in-memory queue. Build a Docker
container image for the application. Create an Amazon ECS task definition that includes the application
container and a separate container to host Redis. Deploy the new task definition as an ECS service using
AWS Fargate and enable Auto Scaling.

A

B. Create an Amazon API Gateway REST API that uses a service proxy to put items in an Amazon SQS queue. Extract the core processing code from the existing application and update it to pull items from Amazon SQS instead of an in-memory queue. Deploy the new processing application to smaller EC2 instances within an Auto Scaling group that scales dynamically based on the approximate number of messages in the Amazon SQS queue.
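The processing half of answer B reduces to a long-polling SQS consumer on the Auto Scaled instances. A minimal sketch (queue URL hypothetical; process_item stands in for the existing core processing code):

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-items"  # hypothetical

def process_item(body: str) -> None:
    """Placeholder for the extracted core processing code (~90 s per item)."""
    print("processing", body)

while True:
    # Long poll for up to 20 seconds; the ASG scales on queue depth.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        process_item(msg["Body"])
        # Delete only after successful processing so failures are redelivered.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```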

56
Q

Q112.

A company has a 24 TB MySQL database in its on-premises data center that grows at the rate of 10 GB per day. The data center is connected to the company’s AWS infrastructure with a 50 Mbps VPN connection.

The company is migrating the application and workload to AWS. The application code is already installed and tested on Amazon EC2. The company now needs to migrate the database and wants to go live on AWS within 3 weeks.

Which of the following approaches meets the schedule with LEAST downtime?

A.

  1. Use the VM Import/Export service to import a snapshot of the on-premises database into AWS.
  2. Launch a new EC2 instance from the snapshot.
  3. Set up ongoing database replication from on-premises to the EC2 database over the VPN.
  4. Change the DNS entry to point to the EC2 database.
  5. Stop the replication.

B.

  1. Launch an AWS DMS instance.
  2. Launch an Amazon RDS Aurora MySQL DB instance.
  3. Configure the AWS DMS instance with the on-premises and Amazon RDS MySQL database information.
  4. Start the replication task within AWS DMS over the VPN.
  5. Change the DNS entry to point to the Amazon RDS MySQL database.
  6. Stop the replication.

C.
1. Create a database export locally using database-native tools.
2. Import that into AWS using AWS Snowball.
3. Launch an Amazon RDS Aurora DB instance.
4. Load the data in the RDS Aurora DB instance from the export.
5. Set up database replication from the on-premises database to the RDS Aurora DB instance over the
VPN.
6. Change the DNS entry to point to the RDS Aurora DB instance.
7. Stop the replication.

D.
1. Take the on-premises application offline.
2. Create a database export locally using database-native tools.
3. Import that into AWS using AWS Snowball.
4. Launch an Amazon RDS Aurora DB instance.
5. Load the data in the RDS Aurora DB instance from the export.
6. Change the DNS entry to point to the Amazon RDS Aurora DB instance.
7. Put the Amazon EC2 hosted application online.

A

C.

  1. Create a database export locally using database-native tools.
  2. Import that into AWS using AWS Snowball.
  3. Launch an Amazon RDS Aurora DB instance.
  4. Load the data in the RDS Aurora DB instance from the export.
  5. Set up database replication from the on-premises database to the RDS Aurora DB instance over the VPN.
  6. Change the DNS entry to point to the RDS Aurora DB instance.
  7. Stop the replication.

*Focus on the 7 steps: stopping the replication is the last step.*

57
Q

Q111. A company has an application that runs a web service on Amazon EC2 instances and stores .jpg images in Amazon S3. The web traffic has a predictable baseline, but demand often spikes unpredictably for short periods of time. The application is loosely coupled and stateless. The .jpg images stored in Amazon S3 are accessed frequently for the first 15 to 20 days; they are seldom accessed thereafter but always need to be immediately available. The CIO has asked to find ways to reduce costs.

Which of the following options will reduce costs? (Select TWO.)

A. Purchase Reserved instances for baseline capacity requirements and use On-Demand instances for
the demand spikes.

B. Configure a lifecycle policy to move the .jpg images on Amazon S3 to S3 IA after 30 days.

C. Use On-Demand instances for baseline capacity requirements and use Spot Fleet instances for the
demand spikes.

D. Configure a lifecycle policy to move the .jpg images on Amazon S3 to Amazon Glacier after 30 days.

E. Create a script that checks the load on all web servers and terminates unnecessary On-Demand instances.

A

A. Purchase Reserved instances for baseline capacity requirements and use On-Demand instances for
the demand spikes.

B. Configure a lifecycle policy to move the .jpg images on Amazon S3 to S3 IA after 30 days.
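Option B as a one-call sketch (bucket name hypothetical); images transition to S3 Standard-IA after 30 days but remain immediately retrievable:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-photo-bucket",  # hypothetical
    LifecycleConfiguration={"Rules": [{
        "ID": "jpg-to-ia",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # apply to all objects
        "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
    }]},
)
```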

58
Q

Q110. A large global company wants to migrate a stateless mission-critical application to AWS. The application
is based on IBM WebSphere (application and integration middleware), IBM MQ (messaging middleware),
and IBM DB2 (database software) on a z/OS operating system.
How should the Solutions Architect migrate the application to AWS?

A. Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling.
Re-platform the IBM MQ to an Amazon EC2-based MQ. Re-platform the z/OS-based DB2 to Amazon
RDS DB2.

B. Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling. Re-platform the IBM MQ to Amazon MQ. Re-platform z/OS-based DB2 to Amazon EC2-based DB2.

C. Orchestrate and deploy the application by using AWS Elastic Beanstalk. Re-platform the IBM MQ to Amazon SQS. Re-platform z/OS-based DB2 to Amazon RDS DB2.

D. Use the AWS Server Migration Service to migrate the IBM WebSphere and IBM DB2 to an Amazon
EC2-based solution. Re-platform the IBM MQ to Amazon MQ.

A

B. Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling. Re-platform the IBM MQ to Amazon MQ. Re-platform z/OS-based DB2 to Amazon EC2-based DB2.

59
Q

Q109. A group of research institutions and hospitals are in a partnership to study 2 PB of genomic data. The
institute that owns the data stores it in an Amazon S3 bucket and updates it regularly.

The institute would like to give all of the organizations in the partnership read access to the data. All members of the
partnership are extremely cost-conscious, and the institute that owns the account with the S3 bucket is concerned about covering the costs for requests and data transfers from Amazon S3.

Which solution allows for secure data sharing without causing the institute that owns the bucket to assume all the costs for S3 requests and data transfers?

A. Ensure that all organizations in the partnership have AWS accounts. In the account with the S3 bucket,
create a cross-account role for each account in the partnership that allows read access to the data. Have the organizations assume and use that read role when accessing the data.

B. Ensure that all organizations in the partnership have AWS accounts. Create a bucket policy on the
bucket that owns the data. The policy should allow the accounts in the partnership read access to the
bucket. Enable Requester Pays on the bucket. Have the organizations use their AWS credentials when
accessing the data.

C. Ensure that all organizations in the partnership have AWS accounts. Configure buckets in each of the
accounts with a bucket policy that allows the institute that owns the data the ability to write to the bucket.
Periodically sync the data from the institute's account to the other organizations. Have the organizations
use their AWS credentials when accessing the data using their accounts.

D. Ensure that all organizations in the partnership have AWS accounts. In the account with the S3 bucket,
create a cross-account role for each account in the partnership that allows read access to the data. Enable Requester Pays on the bucket. Have the organizations assume and use that read role when accessing the data.

A

D. Ensure that all organizations in the partnership have AWS accounts. In the account with the S3 bucket, create a cross-account role for each account in the partnership that allows read access to the data. Enable Requester Pays on the bucket. Have the organizations assume and use that read role when accessing the data.
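The cost-shifting half of answer D is a single S3 call; partners then acknowledge the charge on every read. A sketch (bucket and key hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Requesters now pay for requests and data transfer out of the bucket.
s3.put_bucket_request_payment(
    Bucket="example-genomics-data",  # hypothetical
    RequestPaymentConfiguration={"Payer": "Requester"},
)

# A partner (after assuming the cross-account read role) must acknowledge:
obj = s3.get_object(Bucket="example-genomics-data", Key="cohort-1/sample.vcf",
                    RequestPayer="requester")
```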

60
Q

Q108. A company is finalizing the architecture for its backup solution for applications running on AWS. All of the
applications run on AWS and use at least two Availability Zones in each tier.

Company policy requires IT to durably store nightly backups of all its data in at least two locations: production and disaster recovery. The locations must be in different geographic regions. The company also needs the backup to be available to restore immediately at the production data center and within 24 hours at the disaster recovery location.

All backup processes must be fully automated.
What is the MOST cost-effective backup solution that will meet all the requirements?

A. Back up all the data to a large Amazon EBS volume attached to the backup media server in the production region. Run automated scripts to snapshot these volumes nightly, and copy these snapshots to the disaster recovery region.

B. Back up all the data to Amazon S3 in the disaster recovery region. Use a lifecycle policy to move this
data to Amazon Glacier in the production region immediately. Once the data is replicated, remove the data
from the S3 bucket in the disaster recovery region.

C. Back up all the data to Amazon Glacier in the production region. Set up cross-region replication of this
data to Amazon Glacier in the disaster recovery region. Set up a lifecycle policy to delete any data older
than 60 days.

D. Back up all the data to Amazon S3 in the production region. Set up cross-region replication of this S3 bucket to another region and set up a lifecycle policy in the second region to immediately move this data
to Amazon Glacier.

A

D. Back up all the data to Amazon S3 in the production region. Set up cross-region replication of this S3
bucket to another region and set up a lifecycle policy in the second region to immediately move this data
to Amazon Glacier.
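A sketch of answer D's plumbing: replicate the production bucket cross-Region, then transition the DR copies straight to Glacier (bucket names and role ARN are hypothetical; versioning must already be enabled on both buckets):

```python
import boto3

s3 = boto3.client("s3")

# Cross-Region replication from the production bucket to the DR bucket.
s3.put_bucket_replication(
    Bucket="example-backups-prod",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/crr-role",  # hypothetical
        "Rules": [{"Status": "Enabled", "Prefix": "",
                   "Destination": {"Bucket": "arn:aws:s3:::example-backups-dr"}}],
    },
)

# In the DR Region: move replicated objects to Glacier immediately.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backups-dr",
    LifecycleConfiguration={"Rules": [{
        "ID": "to-glacier-now", "Status": "Enabled", "Filter": {"Prefix": ""},
        "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
    }]},
)
```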

61
Q

Q107. A company is planning the migration of several lab environments used for software testing. An assortment of custom tooling is used to manage the test runs for each lab. The labs use immutable infrastructure for the software test runs, and the results are stored in a highly available SQL database cluster. Although completely rewriting the custom tooling is out of scope for the migration project, the
company would like to optimize workloads during the migration.

Which application migration strategy meets this requirement?

A. Re-host
B. Re-platform
C. Re-factor/re-architect
D. Retire

A

A. Re-host

62
Q

Q106.

A Solutions Architect needs to migrate a legacy application from on-premises to AWS. On-premises, the
application runs on two Linux servers behind a load balancer and accesses a database that is master-master on two servers.

Each application server requires a license file that is tied to the MAC address of the server's network adapter.

It takes the software vendor 12 hours to send new license files through email. The application requires configuration files to use static IPv4 addresses to access the database
servers, not DNS.

Given these requirements, which steps should be taken together to enable a scalable architecture for the
application servers? (Select TWO.)

A. Create a pool of ENIs. Request license files from the vendor for the pool, and store the license files within Amazon S3. Create automation to download an unused license file and attach the corresponding ENI at boot time.

B. Create a pool of ENIs. Request license files from the vendor for the pool, store the license files on an Amazon EC2 instance, modify the configuration files, and create an AMI from the instance. Use this AMI for all instances.

C. Create bootstrap automation to request a new license file from the vendor with a unique return email. Have the server configure itself with the received license file.

D. Create bootstrap automation to attach an ENI from the pool, read the database IP addresses from AWS Systems Manager Parameter Store, and inject those parameters into the local configuration files. Keep SSM up to date using a Lambda function.

E. Install the application on an EC2 instance, configure the application, and configure the IP address information. Create an AMI from this instance and use it for all instances.

A

A. Create a pool of ENIs. Request license files from the vendor for the pool, and store the license files within Amazon S3. Create automation to download an unused license file and attach the corresponding ENI at boot time.

D. Create bootstrap automation to attach an ENI from the pool, read the database IP addresses from AWS Systems Manager Parameter Store, and inject those parameters into the local configuration files. Keep SSM up to date using a Lambda function.

63
Q

Q105. A company runs a public-facing application that uses a Java-based web service via a RESTful API. It is
hosted on Apache Tomcat on a single server in a data center that runs consistently at 30% CPU
utilization.

The use of the API is expected to increase by 10 times with a new product launch. The business wants to migrate the application to AWS with no disruption and needs it to scale to meet demand.

The company has already decided to use Amazon Route 53 and CNAME records to redirect traffic.

How can these requirements be met with the LEAST amount of effort?

A. Use AWS Elastic Beanstalk to deploy the Java web service and enable Auto Scaling. Then switch the application to use the new web service.

B. Lift and shift the Apache server to the cloud using AWS SMS. Then switch the application to direct web service traffic to the new instance.

C. Create a Docker image and migrate the image to Amazon ECS. Then change the application code to direct web service queries to the ECS container.

D. Modify the application to call the web service via Amazon API Gateway. Then create a new AWS Lambda Java function to run the Java web service code. After testing, change API Gateway to use the Lambda function.

A

A. Use AWS Elastic Beanstalk to deploy the Java web service and enable Auto Scaling. Then switch the application to use the new web service.

64
Q

Q104. A company has developed a web application that runs on Amazon EC2 instances in one AWS Region.

The company has taken on new business in other countries and must deploy its application into other
regions to meet low -latency requirements for its users.

The regions can be segregated, and
an application running in one region does not need to communicate with instances in other regions.

How should the company’s Solutions Architect automate the deployment of the application so that it can be MOST efficiently deployed into multiple regions?

A. Write a bash script that uses the AWS CLI to query the current state in one region and output a JSON representation. Pass the JSON representation to the AWS CLI, specifying the region parameter to deploy the application to other regions.

B. Write a bash script that uses the AWS CLI to query the current state in one region and output an AWS CloudFormation template. Create a CloudFormation stack from the template by using the AWS CLI, specifying the region parameter to deploy the application to other regions.

C. Write a CloudFormation template describing the application's infrastructure in the Resources section. Create a CloudFormation stack from the template by using the AWS CLI, specifying multiple regions using the region parameter to deploy the application.

D. Write a CloudFormation template describing the application’s infrastructure in the Resources section. Use a CloudFormation stack set from an administrator account to launch stack instances that deploy the application to other regions.

A

C. Write a CloudFormation template describing the application's infrastructure in the Resources section. Create a CloudFormation stack from the template by using the AWS CLI, specifying multiple regions using the region parameter to deploy the application.
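
Note: a sketch of what "specify the region parameter" looks like in practice with boto3; the template file name and region list are hypothetical.

import boto3

REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-1"]  # hypothetical targets

with open("app-infra.yaml") as f:  # hypothetical CloudFormation template
    template_body = f.read()

# Deploy the same template once per region by pointing the client at it.
for region in REGIONS:
    cfn = boto3.client("cloudformation", region_name=region)
    cfn.create_stack(
        StackName="web-app",
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM resources
    )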

65
Q

Q103. An online retailer needs to regularly process large product catalogs, which are handled in batches. These are sent out to be processed by people using the Amazon Mechanical Turk service, but the retailer has asked its Solutions Architect to design a workflow orchestration system that allows it to handle multiple concurrent Mechanical Turk operations, deal with the result assessment process, and reprocess failures.
Which of the following options gives the retailer the ability to interrogate the state of every workflow with the LEAST amount of implementation effort?
A. Trigger Amazon CloudWatch alarms based upon message visibility in multiple Amazon SQS queues (one queue per workflow stage) and send messages via Amazon SNS to trigger AWS Lambda functions to process the next step. Use Amazon ES and Kibana to visualize Lambda processing logs to see the workflow states.
B. Hold workflow information in an Amazon RDS instance with AWS Lambda functions polling RDS for status changes. Worker Lambda functions then process the next workflow steps. Amazon QuickSight will visualize workflow states directly out of Amazon RDS.
C. Build the workflow in AWS Step Functions, using it to orchestrate multiple concurrent workflows. The status of each workflow can be visualized in the AWS Management Console, and historical data can be written to Amazon S3 and visualized using Amazon QuickSight.
D. Use Amazon SWF to create a workflow that handles a single batch of catalog records with multiple worker tasks to extract the data, transform it, and send it through Mechanical Turk. Use Amazon ES and Kibana to visualize AWS Lambda processing logs to see the workflow states.

A

D. Use Amazon SWF to create a workflow that handles a single batch of catalog records with multiple worker tasks to extract the data, transform it, and send it through Mechanical Turk. Use Amazon ES and Kibana to visualize AWS Lambda processing logs to see the workflow states.
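
Note: a rough boto3 sketch of starting one SWF workflow execution per catalog batch; it assumes the SWF domain and workflow type have already been registered, and all names and the payload are hypothetical.

import boto3
import json
import uuid

swf = boto3.client("swf", region_name="us-east-1")

# Start one workflow execution per catalog batch; worker tasks then extract,
# transform, and route items through Mechanical Turk.
swf.start_workflow_execution(
    domain="catalog-processing",                  # hypothetical domain
    workflowId=f"catalog-batch-{uuid.uuid4()}",
    workflowType={"name": "ProcessCatalogBatch", "version": "1.0"},
    taskList={"name": "catalog-workers"},
    input=json.dumps({"batchId": "2024-001"}),    # hypothetical payload
    executionStartToCloseTimeout="86400",         # seconds, as a string
    taskStartToCloseTimeout="300",
)

The state of every execution can then be interrogated with calls such as list_open_workflow_executions and describe_workflow_execution.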

66
Q

Q102. A media storage application uploads user photos to Amazon S3 for processing. End users are reporting that some uploaded photos are not being processed properly. The Application Developers trace the logs and find that AWS Lambda is experiencing execution issues when thousands of users are on the system simultaneously. Issues are caused by:
• Limits around concurrent executions.
• The performance of Amazon DynamoDB when saving data.
Which actions can be taken to increase the performance and reliability of the application? (Select TWO.)
A. Evaluate and adjust the read capacity units (RCUs) for the DynamoDB tables
B. Evaluate and adjust the write capacity units (WCUs) for the DynamoDB tables
C. Add an Amazon ElastiCache layer to increase the performance of Lambda functions.
D. Configure a dead letter queue that will reprocess failed or timed-out Lambda functions
E. Use S3 Transfer Acceleration to provide lower-latency access to end users.

A

B. Evaluate and adjust the write capacity units (WCUs) for the DynamoDB tables

D. Configure a dead letter queue that will reprocess failed or timed-out Lambda functions
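
Note: a minimal boto3 sketch of both selected actions; the table name, function name, DLQ ARN, and capacity numbers are hypothetical, and update_table applies only to tables in provisioned-capacity mode.

import boto3

# Raise the write capacity on the table that Lambda saves into.
boto3.client("dynamodb").update_table(
    TableName="photo-metadata",  # hypothetical table
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 500},
)

# Send failed or timed-out asynchronous invocations to a dead letter queue.
# The function's execution role needs sqs:SendMessage on the queue.
boto3.client("lambda").update_function_configuration(
    FunctionName="process-photo",  # hypothetical function
    DeadLetterConfig={"TargetArn": "arn:aws:sqs:us-east-1:123456789012:photo-dlq"},
)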

67
Q

Q101. A large company has increased its utilization of AWS over time in an unmanaged way. As such, it has a large number of independent AWS accounts across different business units, projects, and environments. The company has created a Cloud Center of Excellence team, which is responsible for managing all aspects of the AWS Cloud, including its AWS accounts.
Which of the following should the Cloud Center of Excellence team do to BEST address their requirements in a centralized way? (Select TWO.)
A. Control all AWS account root user credentials. Assign AWS IAM users in the account to each user who needs to access AWS resources. Follow the policy of least privilege in assigning permissions to each user.
B. Tag all AWS resources with details about the business unit, project, and environment. Send all AWS Cost and Usage reports to a central Amazon S3 bucket, and use tools such as Amazon Athena and Amazon QuickSight to collect billing details by business unit.
C. Use the AWS Marketplace to choose and deploy a Cost Management tool. Tag all AWS resources with details about the business unit, project, and environment. Send all AWS Cost and Usage reports for the AWS accounts to this tool for analysis.
D. Set up AWS Organizations. Enable consolidated billing, and link all existing AWS accounts to a master billing account. Tag all AWS resources with details about the business unit, the project, and the environment. Analyze Cost and Usage reports using tools such as Amazon Athena and Amazon QuickSight to collect billing details by business unit.
E. Using a master AWS account, create IAM users within the master account. Define IAM roles in the other AWS accounts, which cover each of the required functions in the account. Follow the policy of least privilege in assigning permissions to each role, then enable the IAM users to assume the roles that they need to use.

A

D. Set up AWS Organizations. Enable consolidated billing, and link all existing AWS accounts to a master billing account. Tag all AWS resources with details about the business unit, the project, and the environment. Analyze Cost and Usage reports using tools such as Amazon Athena and Amazon QuickSight to collect billing details by business unit.
E. Using a master AWS account, create IAM users within the master account. Define IAM roles in the other AWS accounts, which cover each of the required functions in the account. Follow the policy of least privilege in assigning permissions to each role, then enable the IAM users to assume the roles that they need to use.
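
Note: a sketch of the Organizations setup with boto3, run from the management (master) account; the invited account ID is hypothetical.

import boto3

org = boto3.client("organizations")

# One-time: create the organization. "ALL" enables consolidated billing
# plus the full set of management features.
org.create_organization(FeatureSet="ALL")

# Invite each existing stand-alone account to join the organization.
org.invite_account_to_organization(
    Target={"Id": "111122223333", "Type": "ACCOUNT"}  # hypothetical account ID
)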

68
Q

Q100. A public retail web application uses an Application Load Balancer (ALB) in front of Amazon EC2 instances running across multiple Availability Zones (AZs) in a Region, backed by an Amazon RDS MySQL Multi-AZ deployment. Target group health checks are configured to use HTTP and pointed at the product catalog page. Auto Scaling is configured to maintain the web fleet size based on the ALB health check.
Recently, the application experienced an outage. Auto Scaling continuously replaced the instances during the outage. A subsequent investigation determined that the web server metrics were within the normal range, but the database tier was experiencing high load, resulting in severely elevated query response times.
Which of the following changes together would remediate these issues while improving monitoring capabilities for the availability and functionality of the entire application stack for future growth? (Select TWO.)
A. Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web
application to reduce the load on the backend database tier.
B. Configure the target group health check to point at a simple HTML page instead of a product catalog
page and the Amazon Route 53 health check against the product page to evaluate full application
functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
C. Configure the target group health check to use a TCP check of the Amazon EC2 web server and the
Amazon Route 53 health check against the product page to evaluate full application functionality.
Configure Amazon CloudWatch alarms to notify administrators when the site fails.
D. Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a high-load,
impaired RDS instance in the database tier.
E. Configure an Amazon ElastiCache cluster and place it between the web application and the RDS MySQL database to reduce the load on the backend database tier.

A

A. Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web application to reduce the load on the backend database tier.

C. Configure the target group health check to use a TCP check of the Amazon EC2 web server and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
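
Note: a sketch of the Route 53 health check against the product page plus the notification alarm; the domain, path, thresholds, and SNS topic ARN are hypothetical. Route 53 health check metrics are published in us-east-1.

import boto3
import uuid

r53 = boto3.client("route53")
check = r53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "www.example.com",  # hypothetical
        "ResourcePath": "/products",                    # product catalog page
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(
    AlarmName="site-availability",
    Namespace="AWS/Route53",
    MetricName="HealthCheckStatus",
    Dimensions=[{"Name": "HealthCheckId",
                 "Value": check["HealthCheck"]["Id"]}],
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",  # status < 1 means unhealthy
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical
)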

69
Q

Q99. A company is designing a new highly available web application on AWS. The application requires consistent and reliable connectivity from the application servers in AWS to a backend REST API hosted in the company's on-premises environment. The backend connection between AWS and on-premises will be routed over an AWS Direct Connect connection through a private virtual interface. Amazon Route 53 will be used to manage private DNS records for the application to resolve the IP address of the backend REST API.
Which design would provide a reliable connection to the backend API?
A. Implement at least two backend endpoints for the backend REST API, and use Route 53 health checks
to monitor the availability of each backend endpoint and perform DNS-level failover.
B. Install a second Direct Connect connection from a different network carrier and attach it to the same
virtual private gateway as the first Direct Connect connection.
C. Install a second cross-connect for the same Direct Connect connection from the same network carrier,
and join both connections to the same link aggregation group (LAG) on the same private virtual interface.
D. Create an IPSec VPN connection routed over the public internet from the on-premises data center to
AWS and attach it to the same virtual private gateway as the Direct Connect connection

A

B. Install a second Direct Connect connection from a different network carrier and attach it to the same
virtual private gateway as the first Direct Connect connection.

70
Q

Q98. A company is using an Amazon CloudFront distribution to distribute both static and dynamic content from a web application running behind an Application Load Balancer. The web application requires user authorization and session tracking for dynamic content. The CloudFront distribution has a single cache behavior configured to forward the Authorization, Host, and User-Agent HTTP whitelist headers and a session cookie to the origin. All other cache behavior settings are set to their default value.
A valid ACM certificate is applied to the CloudFront distribution with a matching CNAME in the distribution settings. The ACM certificate is also applied to the HTTPS listener for the Application Load Balancer. The CloudFront origin protocol policy is set to HTTPS only. Analysis of the cache statistics report shows that the miss rate for this distribution is very high.
What can the Solutions Architect do to improve the cache hit rate for this distribution without causing the SSL/TLS handshake between CloudFront and the Application Load Balancer to fail?
A. Create two cache behaviors for static and dynamic content. Remove the User-Agent and Host HTTP
headers from the whitelist headers section on both of the cache behaviors. Remove the session cookie
from the whitelist cookies section and the Authorization HTTP header from the whitelist headers section
for cache behavior configured for static content.
B. Remove the User-Agent and Authorization HTTP headers from the whitelist headers section of the
cache behavior. Then update the cache behavior to use pre-signed cookies for authorization.
C. Remove the Host HTTP header from the whitelist headers section and remove the session cookie from
the whitelist cookies section for the default cache behavior. Enable automatic object compression and
use Lambda@Edge viewer request events for user authorization.
D. Create two cache behaviors for static and dynamic content. Remove the User-Agent HTTP header
from the whitelist headers section on both of the cache behaviors. Remove the session cookie from the
whitelist cookies section and the Authorization HTTP header from the whitelist headers section for cache
behavior configured for static content.

A

C. Remove the Host HTTP header from the whitelist headers section and remove the session cookie from
the whitelist cookies section for the default cache behavior. Enable automatic object compression and
use Lambda@Edge viewer request events for user authorization.
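
Note: a toy Lambda@Edge viewer-request handler showing where the user-authorization hook sits; a real implementation would validate a signed session token rather than just checking that a cookie exists.

# Lambda@Edge viewer-request handler (Python)
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    cookies = request["headers"].get("cookie", [])

    # Hypothetical check: is any session cookie present at all?
    authorized = any("session=" in c["value"] for c in cookies)

    if not authorized:
        # Short-circuit at the edge with a 401 instead of hitting the origin.
        return {
            "status": "401",
            "statusDescription": "Unauthorized",
        }

    # Pass the (possibly cached) request through to CloudFront.
    return request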

71
Q

Q97. A Solutions Architect is designing the storage layer for a recently purchased application. The application will be running on Amazon EC2 instances and has the following layers and requirements:
• Data layer: A POSIX file system shared across many systems.
• Service layer: Static file content that requires block storage with more than 100K IOPS.
Which combination of AWS services will meet these needs? (Select TWO.)
A. Data layer - Amazon S3
B. Data layer - Amazon EC2 Ephemeral Storage
C. Data layer - Amazon EFS
D. Service layer - Amazon EBS volumes with Provisioned IOPS
E. Service layer - Amazon EC2 Ephemeral Storage

A

C. Data layer - Amazon EFS
D. Service layer - Amazon EBS volumes with Provisioned IOPS
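
Note: a sketch of provisioning both layers with boto3; the AZ, size, and IOPS figures are hypothetical, and single volumes above 64,000 IOPS require io2 Block Express on a supported instance type.

import boto3

# Data layer: a shared POSIX file system.
efs = boto3.client("efs")
fs = efs.create_file_system(PerformanceMode="generalPurpose", Encrypted=True)

# Service layer: a Provisioned IOPS (io2) block volume.
boto3.client("ec2").create_volume(
    AvailabilityZone="us-east-1a",  # hypothetical AZ
    Size=500,                       # GiB
    VolumeType="io2",
    Iops=100000,
)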

72
Q

Q96. A Solutions Architect is designing a network solution for a company that has applications running in a data center in Northern Virginia. The applications in the company's data center require predictable performance when connecting to applications running in a virtual private cloud (VPC) located in us-east-1 and a secondary VPC in us-west-2 within the same account. The company data center is colocated in an AWS Direct Connect facility that serves the us-east-1 region. The company has already ordered an AWS Direct Connect connection and a cross-connect has been established.
Which solution will meet the requirements at the LOWEST cost?

A. Provision a Direct Connect gateway and attach the virtual private gateway (VGW) for the VPC in us-east-1 and the VGW for the VPC in us-west-2. Create a private VIF on the Direct Connect connection and associate it to the Direct Connect gateway.
B. Create private VIFs on the Direct Connect connection for each of the company's VPCs in the us-east-1 and us-west-2 regions. Configure the company's data center router to connect directly with the VPCs in those regions via the private VIFs.
C. Deploy a transit VPC solution using Amazon EC2-based router instances in the us-east-1 region. Establish IPsec VPN tunnels between the transit routers and virtual private gateways (VGWs) located in the us-east-1 and us-west-2 regions, which are attached to the company's VPCs in those regions. Create a public VIF on the Direct Connect connection and establish IPsec VPN tunnels over the public VIF between the transit routers and the company's data center router.
D. Order a second Direct Connect connection to a Direct Connect facility with connectivity to the us-west-2 region. Work with a partner to establish a network extension link over dark fiber from the Direct Connect facility to the company's data center. Establish private VIFs on the Direct Connect connections for each of the company's VPCs in the respective regions. Configure the company's data center router to connect directly with the VPCs in those regions via the private VIFs.

A

A. Provision a Direct Connect gateway and attach the virtual private gateway (VGW) for the VPC in us-east-1 and the VGW for the VPC in us-west-2. Create a private VIF on the Direct Connect connection and associate it to the Direct Connect gateway.
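
Note: a sketch of the Direct Connect gateway wiring with boto3; the gateway name, ASN, and VGW IDs are hypothetical, and the private VIF itself would be created with create_private_virtual_interface.

import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="corp-dx-gw",  # hypothetical name
    amazonSideAsn=64512,                    # private ASN for the Amazon side
)
gw_id = gw["directConnectGateway"]["directConnectGatewayId"]

# Associate the virtual private gateways of both VPCs (us-east-1 and
# us-west-2) with the single Direct Connect gateway.
for vgw_id in ["vgw-aaaa1111", "vgw-bbbb2222"]:  # hypothetical VGW IDs
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=gw_id,
        virtualGatewayId=vgw_id,
    )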

73
Q

Q95. A company has an existing on-premises three-tier web application. The Linux web servers serve content from a centralized file share on a NAS server because the content is refreshed several times a day from various sources. The existing infrastructure is not optimized, and the company would like to move to AWS in order to gain the ability to scale resources up and down in response to load. On-premises and AWS resources are connected using AWS Direct Connect.
How can the company migrate the web infrastructure to AWS without delaying the content refresh process?
A. Create a cluster of web server Amazon EC2 instances behind a Classic Load Balancer on AWS. Share an Amazon EBS volume among all instances for the content. Schedule a periodic synchronization of this volume and the NAS server.
B. Create an on-premises file gateway using AWS Storage Gateway to replace the NAS server and replicate content to AWS. On the AWS side, mount the same Storage Gateway bucket to each web server Amazon EC2 instance to serve the content.
C. Expose an Amazon EFS share to on-premises users to serve as the NAS server. Mount the same EFS share to the web server Amazon EC2 instances to serve the content.
D. Create web server Amazon EC2 instances on AWS in an Auto Scaling group. Configure a nightly process where the web server instances are updated from the NAS server.

A

C. Expose an Amazon EFS share to on-premises users to serve as the NAS server. Mount the same EFS share to the web server Amazon EC2 instances to serve the content.

74
Q

Q94. A company runs an ordering system on AWS using Amazon SQS and AWS Lambda, with each order received as a JSON message. Recently, the company had a marketing event that led to a tenfold increase in orders. With this increase, the following undesired behaviors started in the ordering system:
• Lambda failures while processing orders lead to queue backlogs.
• The same orders have been processed multiple times.
A Solutions Architect has been asked to solve the existing issues with the ordering system and add the following resiliency features:
• Retain problematic orders for analysis.
• Send notification if errors go beyond a threshold value.
How should the Solutions Architect meet these requirements?
A. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a dead letter queue for messages that could not be processed, and create an Amazon CloudWatch alarm on Lambda errors for notification.
B. Receive single messages with each Lambda invocation, put additional Lambda workers to poll the queue, delete messages after processing, increase the message timer for the messages, use Amazon CloudWatch Logs for messages that could not be processed, and create a CloudWatch alarm on Lambda errors for notification.
C. Receive multiple messages with each Lambda invocation, use long polling when receiving the messages, log the errors from the message processing code using Amazon CloudWatch Logs, create a dead letter queue with AWS Lambda to capture failed invocations, and create CloudWatch events on Lambda errors for notification.
D. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a delay queue for messages that could not be processed, and create an Amazon CloudWatch metric on Lambda errors for notification.

A

A. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a dead letter queue for messages that could not be processed, and create an Amazon CloudWatch alarm on Lambda errors for notification.
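
Note: a minimal boto3 sketch of the queue and alarm pieces of this answer; queue names, the function name, thresholds, and the SNS topic ARN are hypothetical.

import boto3
import json

sqs = boto3.client("sqs")

# Dead letter queue that retains problematic orders for analysis.
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main queue: a visibility timeout longer than the Lambda run prevents
# duplicate processing, and the redrive policy moves a message to the DLQ
# after five failed receives.
sqs.create_queue(
    QueueName="orders",
    Attributes={
        "VisibilityTimeout": "300",
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": 5}
        ),
    },
)

# Notify operations when Lambda errors cross a threshold.
boto3.client("cloudwatch").put_metric_alarm(
    AlarmName="order-processing-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "process-orders"}],  # hypothetical
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical
)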

75
Q

Q93. A company wants to launch an online shopping website in multiple countries and must ensure that customers are protected against potential “man-in-the-middle” attacks.
Which architecture will provide the MOST secure site access?
A. Use Amazon Route 53 for domain registration and DNS services. Enable DNSSEC for all Route 53 requests. Use AWS Certificate Manager (ACM) to register TLS/SSL certificates for the shopping website and use Application Load Balancers configured with those TLS/SSL certificates for the site. Use the Server Name Indication (SNI) extension in all client requests to the site.
B. Register 2048-bit encryption keys from a third-party certificate service. Use a third-party DNS provider that uses the customer-managed keys for DNSSEC. Upload the keys to ACM and use ACM to automatically deploy the certificates for secure web services to an EC2 front-end web server fleet by using NGINX. Use the Server Name Indication extension in all client requests to the site.
C. Use Route 53 for domain registration. Register 2048-bit encryption keys from a third-party certificate service. Use a third-party DNS service that supports DNSSEC for DNS requests that use the customer-managed keys. Import the customer-managed keys to ACM to deploy the certificates to Classic Load Balancers configured with those TLS/SSL certificates for the site. Use the Server Name Indication extension in all client requests to the site.
D. Use Route 53 for domain registration and host the company DNS root servers on Amazon EC2 instances running Bind. Enable DNSSEC for DNS requests. Use ACM to register TLS/SSL certificates for the shopping website and use Application Load Balancers configured with those TLS/SSL certificates for the site. Use the Server Name Indication extension in all client requests to the site.

A

A. Use Amazon Route 53 for domain registration and DNS services. Enable DNSSEC for all Route 53 requests. Use AWS Certificate Manager (ACM) to register TLS/SSL certificates for the shopping website and use Application Load Balancers configured with those TLS/SSL certificates for the site. Use the Server Name Indication (SNI) extension in all client requests to the site.
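
Note: a sketch of the two AWS-side pieces with boto3; the domain and hosted zone ID are hypothetical, and enabling DNSSEC signing first requires a key-signing key (create_key_signing_key) backed by KMS.

import boto3

# Public certificate for the site, validated via DNS and attached to the ALB.
boto3.client("acm", region_name="us-east-1").request_certificate(
    DomainName="shop.example.com",                     # hypothetical
    SubjectAlternativeNames=["www.shop.example.com"],  # hypothetical
    ValidationMethod="DNS",
)

# Turn on DNSSEC signing for the Route 53 hosted zone.
boto3.client("route53").enable_hosted_zone_dnssec(
    HostedZoneId="Z0123456789EXAMPLE"  # hypothetical zone ID
)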

76
Q

Q92. An enterprise runs 103 line-of-business applications on virtual machines in an on-premises data center. Many of the applications are simple PHP, Java, or Ruby web applications, are no longer actively developed, and serve little traffic.
Which approach should be used to migrate these applications to AWS with the LOWEST infrastructure costs?
A. Deploy the applications to single-instance AWS Elastic Beanstalk environments without a load balancer.
B. Use AWS SMS to create AMIs for each virtual machine and run them in Amazon EC2.
C. Convert each application to a Docker image and deploy to a small Amazon ECS cluster behind an Application Load Balancer.
D. Use VM Import/Export to create AMIs for each virtual machine and run them in single-instance AWS Elastic Beanstalk environments by configuring a custom image.

A

D. Use VM Import/Export to create AMIs for each virtual machine and run them in single-instance AWS Elastic Beanstalk environments by configuring a custom image.

77
Q

Q91. A company has a legacy application running on servers on-premises. To increase the application's reliability, the company wants to gain actionable insights using application logs. A Solutions Architect has been given the following requirements for the solution:
• Aggregate logs using AWS.
• Automate log analysis for errors.
• Notify the Operations team when errors go beyond a specified threshold.
Which solution meets the requirements?
A. Install Amazon Kinesis Agent on servers, send logs to Amazon Kinesis Data Streams and use Amazon Kinesis Data Analytics to identify errors, and create an Amazon CloudWatch alarm to notify the Operations team of errors.
B. Install an AWS X-Ray agent on servers, send logs to AWS Lambda and analyze them to identify errors, and use Amazon CloudWatch Events to notify the Operations team of errors.
C. Install Logstash on servers, send logs to Amazon S3 and use Amazon Athena to identify errors, and use sendmail to notify the Operations team of errors.
D. Install the Amazon CloudWatch agent on servers, send logs to Amazon CloudWatch Logs and use metric filters to identify errors, and create a CloudWatch alarm to notify the Operations team of errors.

A

D. Install the Amazon CloudWatch agent on servers, send logs to Amazon CloudWatch Logs and use metric filters to identify errors, and create a CloudWatch alarm to notify the Operations team of errors.
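
Note: a minimal boto3 sketch of the metric filter and alarm; the log group, namespace, threshold, and SNS topic ARN are hypothetical, and the CloudWatch agent must already be shipping the on-premises logs into the log group.

import boto3

# Count log lines containing "ERROR" as a custom metric.
boto3.client("logs").put_metric_filter(
    logGroupName="/legacy-app/application",  # hypothetical log group
    filterName="error-count",
    filterPattern="ERROR",
    metricTransformations=[{
        "metricName": "AppErrors",
        "metricNamespace": "LegacyApp",
        "metricValue": "1",
    }],
)

# Alarm the Operations team once errors pass the threshold.
boto3.client("cloudwatch").put_metric_alarm(
    AlarmName="legacy-app-errors",
    Namespace="LegacyApp",
    MetricName="AppErrors",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical
)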
