Questions Flashcards

1
Q

An Amazon RDS database was created with encryption enabled using an AWS managed CMK. The database has been reclassified and no longer requires encryption. How can a Solutions Architect unencrypt the database with the LEAST operational overhead?
• ​
Create an unencrypted snapshot of the DB instance and create a new unencrypted DB instance from the snapshot.
• ​
Export the data from the DB instance and import the data into an unencrypted DB instance.
• ​
Disable encryption by running the CreateDBInstance API operation and setting the StorageEncrypted parameter to false.
• ​
Create an unencrypted read replica of the encrypted DB instance and then promote the read replica to primary.

A

Explanation
The only way to unencrypt an encrypted database is to export the data and import it into a new, unencrypted DB instance. You cannot create unencrypted snapshots of encrypted DB instances, and you cannot create unencrypted read replicas of an encrypted DB instance.
You also cannot modify the encryption status of an existing DB instance using the API, CLI, or AWS Management Console.
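As an illustrative sketch only (not part of the original answer), the replacement unencrypted instance could be provisioned with boto3 before the data is exported and re-imported with native tools such as mysqldump; every identifier, size, and credential below is a placeholder.

```python
import boto3

rds = boto3.client("rds")

# Provision a new, unencrypted MySQL instance to receive the exported data.
# StorageEncrypted is set explicitly for clarity. All values are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="app-db-unencrypted",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="example-password",
    StorageEncrypted=False,
)

# The data itself is then moved with native tooling, for example:
#   mysqldump -h <encrypted-endpoint> appdb | mysql -h <unencrypted-endpoint> appdb
```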
CORRECT: "Export the data from the DB instance and import the data into an unencrypted DB instance" is the correct answer.
INCORRECT: "Create an unencrypted snapshot of the DB instance and create a new unencrypted DB instance from the snapshot" is incorrect as explained above.
INCORRECT: "Create an unencrypted read replica of the encrypted DB instance and then promote the read replica to primary" is incorrect as explained above.
INCORRECT: "Disable encryption by running the CreateDBInstance API operation and setting the StorageEncrypted parameter to false" is incorrect as explained above.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-database/

2
Q

A database for an eCommerce website was deployed on an Amazon RDS for MySQL DB instance with General Purpose SSD storage. The database performed well for several weeks until a peak shopping period, when customers experienced slow performance and timeouts. Amazon CloudWatch metrics indicate that reads and writes to the DB instance were experiencing long response times. Metrics show CPU utilization below 50%, plenty of available memory, and sufficient free storage space. There is no evidence of database connectivity issues in the application server logs.
What could be the root cause of database performance issues?
• ​
A large number of reads and writes exhausted the network bandwidth available to the RDS for MySQL DB instance.
• ​
A large number of reads and writes exhausted the I/O credit balance due to provisioning low disk storage during the setup phase.
• ​
The increased load caused the data in the tables to change frequently, requiring indexes to be rebuilt to optimize queries.
• ​
The increased load resulted in the maximum number of allowed connections to the database instance.

A

Explanation
Baseline I/O performance for General Purpose SSD storage is 3 IOPS for each GiB, with a minimum of 100 IOPS. This relationship means that larger volumes have better performance. In this case the volume is only 100 GiB, so it has a baseline of only 300 IOPS.
When using General Purpose SSD storage, a DB instance receives an initial I/O credit balance of 5.4 million I/O credits. This initial credit balance is enough to sustain a burst performance of 3,000 IOPS for 30 minutes. This balance is designed to provide a fast initial boot cycle for boot volumes and to provide a good bootstrapping experience for other applications.
Volumes earn I/O credits at the baseline performance rate of 3 IOPS for each GiB of volume size. For example, a 100-GiB SSD volume has a baseline performance of 300 IOPS.
It is clear that in this scenario the increased load has caused the I/O credit balance to become exhausted before the end of the peak shopping period. This means that performance will be limited until there is sufficient I/O credit.
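A quick back-of-the-envelope calculation using the figures quoted above shows how quickly the credit balance drains under sustained peak load:

```python
# gp2 burst-credit arithmetic from the explanation above.
volume_gib = 100
baseline_iops = max(3 * volume_gib, 100)   # 3 IOPS per GiB, minimum 100 -> 300 IOPS
initial_credits = 5_400_000                # initial I/O credit balance
burst_iops = 3_000                         # maximum burst rate

# Ignoring the credits earned at the baseline rate while bursting,
# the initial balance sustains the full burst rate for:
seconds_of_burst = initial_credits / burst_iops    # 1,800 s = 30 minutes

print(f"Baseline performance: {baseline_iops} IOPS")
print(f"Initial credits sustain {burst_iops} IOPS for ~{seconds_of_burst / 60:.0f} minutes")
# After the balance is exhausted, throughput falls back to the 300 IOPS baseline.
```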
CORRECT: "A large number of reads and writes exhausted the I/O credit balance due to provisioning low disk storage during the setup phase" is the correct answer.
INCORRECT: "The increased load caused the data in the tables to change frequently, requiring indexes to be rebuilt to optimize queries" is incorrect. Reading and writing rows should not result in indexes being rebuilt.
INCORRECT: "The increased load resulted in the maximum number of allowed connections to the database instance" is incorrect. RDS for MySQL can support up to 100,000 client connections. In this case the application servers are the clients, and it is unlikely there are that many app servers.
INCORRECT: "A large number of reads and writes exhausted the network bandwidth available to the RDS for MySQL DB instance" is incorrect. Based on the storage configuration presented, it is far more likely that storage performance is the issue, as it will be exhausted long before the network bandwidth.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-storage/

3
Q

An eCommerce company runs a successful website with a growing base of customers. The website is becoming popular internationally and demand is increasing quickly. The website is currently hosted in an on-premises data center with web servers and a MySQL database. The company plans to migrate the workloads to AWS. A Solutions Architect has been asked to create a solution that:
- Improves security
- Improves reliability
- Improves availability
- Reduces latency
- Reduces maintenance
Which combination of steps should the Solutions Architect take to meet these requirements? (Select THREE.)
• ​
Launch Amazon EC2 instances in two Availability Zones to host a highly available MySQL database cluster.
• ​
Migrate the database to an Amazon Aurora MySQL DB cluster configured for Multi-AZ.
• ​
Host static website content in Amazon S3. Use S3 Transfer Acceleration to reduce latency while serving webpages. Use AWS WAF to improve website security.
• ​
Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving webpages. Use AWS WAF to improve website security.
• ​
Create an Auto Scaling group of Amazon EC2 instances in two Availability Zones and attach an Application Load Balancer.
• ​
Migrate the database to a single-AZ Amazon RDS for MySQL DB instance.

A

Explanation
This is a straightforward migration to the cloud with a standard set of security, performance, and reliability requirements. To meet them, an Auto Scaling group should be created across multiple AZs for the web layer, placed behind an ALB that distributes incoming connections.
For the database layer, an Aurora MySQL DB cluster with an Aurora Replica in another AZ provides Multi-AZ failover. This ensures the database layer is highly available and reduces maintenance.
To improve performance for global users, static content can be hosted in Amazon S3 with the Amazon CloudFront CDN caching it in Edge Locations around the world. Adding AWS WAF provides additional security.
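For illustration only, the Aurora MySQL cluster with a replica in a second AZ could be sketched with boto3 as below; the identifiers, AZ names, instance class, and credentials are all assumptions, not values from the question.

```python
import boto3

rds = boto3.client("rds")

# Create the Aurora MySQL cluster (the storage layer is shared across AZs).
rds.create_db_cluster(
    DBClusterIdentifier="shop-aurora-cluster",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="example-password",
)

# Writer instance in one AZ, Aurora Replica in another AZ for Multi-AZ failover.
for name, az in [("shop-aurora-1", "us-east-1a"), ("shop-aurora-2", "us-east-1b")]:
    rds.create_db_instance(
        DBInstanceIdentifier=name,
        DBClusterIdentifier="shop-aurora-cluster",
        Engine="aurora-mysql",
        DBInstanceClass="db.r5.large",
        AvailabilityZone=az,
    )
```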
CORRECT: "Create an Auto Scaling group of Amazon EC2 instances in two Availability Zones and attach an Application Load Balancer" is a correct answer.
CORRECT: "Migrate the database to an Amazon Aurora MySQL DB cluster configured for Multi-AZ" is a correct answer.
CORRECT: "Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving webpages. Use AWS WAF to improve website security" is a correct answer.
INCORRECT: "Launch Amazon EC2 instances in two Availability Zones to host a highly available MySQL database cluster" is incorrect. A self-managed database cluster requires more maintenance, so it is not the best solution.
INCORRECT: "Host static website content in Amazon S3. Use S3 Transfer Acceleration to reduce latency while serving webpages. Use AWS WAF to improve website security" is incorrect. Transfer Acceleration is for uploading data using the CloudFront Edge network. For serving static assets, use a CloudFront distribution.
INCORRECT: "Migrate the database to a single-AZ Amazon RDS for MySQL DB instance" is incorrect. This does not provide the availability required; deploying an Aurora Replica in another AZ provides high availability.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-awswaf.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-compute/
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-database/

4
Q

An application uses Amazon EC2 instances in an Auto Scaling group and an Amazon RDS MySQL database. The web application has occasional spikes of traffic during the day. The operations team has determined the most appropriate instance sizes for both the EC2 instances and the DB instance. All instances use On-Demand pricing.
Which of the following steps can be taken to gain the most cost savings without impacting the reliability of the application?
• ​
Reserve capacity for all EC2 instances and leverage Spot Instance pricing for the RDS database.
• ​
Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running.

• ​
Use On-Demand pricing for the RDS database and use Spot pricing for the EC2 instances in the Auto Scaling group
• ​
Use Spot instance pricing for the RDS database and the EC2 instances in the Auto Scaling group.

A

Explanation
The best cost-saving measure is to reserve capacity for the RDS database, as the instance should already be adequately sized to handle any small bursts of traffic (RDS scales vertically, or reads must be offloaded to replicas).
For Amazon EC2, a combination of Reserved Instances and On-Demand is the best option: Reserved Instances cover the steady-state baseline, and On-Demand handles the additional instances launched during busy periods.
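The reasoning can be made concrete with a rough comparison; the hourly rates, instance counts, and hours below are purely hypothetical placeholders rather than real AWS prices.

```python
# Hypothetical hourly rates - substitute real pricing for your region/instance type.
on_demand_rate = 0.10      # per instance-hour, On-Demand
reserved_rate = 0.06       # effective per instance-hour with a 1-year Reserved Instance

baseline_instances = 4     # always running
peak_extra_instances = 6   # only launched during daily spikes
peak_hours_per_day = 4
hours_per_month = 730

baseline_hours = baseline_instances * hours_per_month
peak_hours = peak_extra_instances * peak_hours_per_day * 30

all_on_demand = (baseline_hours + peak_hours) * on_demand_rate
mixed = baseline_hours * reserved_rate + peak_hours * on_demand_rate

print(f"All On-Demand:          ${all_on_demand:,.2f}/month")
print(f"Reserved baseline + OD: ${mixed:,.2f}/month")
```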
CORRECT: "Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running" is the correct answer.
INCORRECT: "Use Spot instance pricing for the RDS database and the EC2 instances in the Auto Scaling group" is incorrect. Spot Instances can be terminated when AWS needs the capacity back, which could impact the reliability of the application. Spot pricing is also not available for Amazon RDS.
INCORRECT: "Use On-Demand pricing for the RDS database and use Spot pricing for the EC2 instances in the Auto Scaling group" is incorrect. This is not the most cost-effective choice and leaves the web application vulnerable to instance termination.
INCORRECT: "Reserve capacity for all EC2 instances and leverage Spot Instance pricing for the RDS database" is incorrect. This option does not cater for the spikes in load and leaves the DB vulnerable to instance termination.
References:
https://aws.amazon.com/ec2/pricing/reserved-instances/
https://aws.amazon.com/ec2/spot/
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-compute/
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-cost-management/

5
Q

The telecommunications company that you are working for will make a major public announcement for a new phone offer, and it is expected that millions of people will access their website to get the new offer. Their e-commerce platform is running on an Auto Scaling group of On-Demand EC2 instances deployed across multiple Availability Zones. For the database tier, the platform is using an Amazon RDS database in a Multi-AZ deployment configuration. Their e-commerce site performs a high number of small reads and writes per second to handle customer transactions and relies on an eventual consistency model. The Operations team identified read contention on the RDS MySQL database after conducting a series of performance tests.
Which combination of options should you implement to provide a fast, cost-efficient, and scalable solution? (Select TWO.)
• ​
Migrate the database to Amazon Redshift and use its massively parallel query execution capability to improve the read performance of the application.
• ​
Set up Read Replicas in each Availability Zone.

• ​
Modify the Amazon RDS Multi-AZ deployments configuration to launch multiple standby database instances. Distribute the incoming traffic to the standby instances to improve the database performance.
• ​
Implement an in-memory cache using Amazon ElastiCache

• ​
Vertically scale your RDS MySQL Instance by upgrading its instance size with provisioned IOPS.

A

Set up Read Replicas in each Availability Zone.
(Correct)
Implement an in-memory cache using Amazon ElastiCache
(Correct)

Explanation

For this scenario, the optimal services to use are Amazon ElastiCache and RDS Read Replicas. Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing and Q&A portals) or compute-intensive workloads (such as a recommendation engine) by allowing you to store the objects that are often read in cache. Moreover, with Redis’ support for advanced data structures, you can augment the database tier to provide features (such as leaderboard, counting, session and tracking) that are not easily achievable via databases in a cost-effective way.
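As a minimal sketch of the cache-aside pattern this describes (assuming the redis-py client and a hypothetical db_query helper), frequently read items are served from ElastiCache for Redis before falling back to the database:

```python
import json
import redis  # assumes the redis-py client is installed

# Placeholder ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.example.use1.cache.amazonaws.com", port=6379)

def get_product(product_id, db_query):
    """Cache-aside read: try Redis first, fall back to the database, then populate the cache."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    row = db_query(product_id)              # hypothetical helper that reads from RDS MySQL
    cache.setex(key, 300, json.dumps(row))  # cache the result for 5 minutes
    return row
```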
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads.
You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, Oracle, and PostgreSQL as well as Amazon Aurora.

You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
To further maximize read performance, Amazon RDS for MySQL allows you to add table indexes directly to Read Replicas, without those indexes being present on the master.
Because read replicas can be promoted to master status, they are useful as part of a sharding implementation. To shard your database, add a read replica and promote it to master status, then, from each of the resulting DB Instances, delete the data that belongs to the other shard.
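Creating the replicas themselves is one API call per replica; a minimal boto3 sketch, with hypothetical identifiers and Availability Zones, might look like this:

```python
import boto3

rds = boto3.client("rds")

# One read replica per Availability Zone used by the application tier.
for suffix, az in [("a", "us-east-1a"), ("b", "us-east-1b"), ("c", "us-east-1c")]:
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier=f"shop-mysql-replica-{suffix}",
        SourceDBInstanceIdentifier="shop-mysql-primary",
        AvailabilityZone=az,
    )
```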
Setting up Read Replicas in each Availability Zone is correct because Read Replicas are used to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads hence, improving the read performance.
Implementing an in-memory cache using Amazon ElastiCache is correct because ElastiCache is an in-memory caching solution which reduces the load on the database and improves the read performance.
Migrating the database to Amazon Redshift and using its massively parallel query execution capability to improve the read performance of the application is incorrect because Amazon Redshift is more suitable for OLAP-type applications and not for online transaction processing (OLTP). Redshift is also not suitable to host your MySQL database.
Modifying the Amazon RDS Multi-AZ deployments configuration to launch multiple standby database instances and distributing the incoming traffic to the standby instances to improve the database performance is incorrect because you cannot distribute the incoming traffic to the standby instances since these are not readable at all. These database instances are primarily used to improve the availability of your database and your application.
Vertically scaling your RDS MySQL Instance by upgrading its instance size with provisioned IOPS is incorrect because although upgrading the instance size may improve the read performance to a certain extent, it is not as scalable compared with Read Replicas or ElastiCache.

References:

https://aws.amazon.com/elasticache/
https://aws.amazon.com/rds/details/read-replicas/

Check out this Amazon ElastiCache Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-elasticache/

Check out this Amazon RDS Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-relational-database-service-amazon-rds/

6
Q

A Solutions Architect used the AWS Application Discovery Service to gather information about some on-premises database servers. The tool discovered an Oracle data warehouse and several MySQL databases. The company plans to migrate to AWS, and the Solutions Architect must determine the best migration pattern for each database.
Which combination of migration patterns will reduce licensing costs and operational overhead? (Select TWO.)
• ​
Lift and shift the Oracle data warehouse to Amazon EC2 using AWS Snowball.
• ​
Migrate the Oracle data warehouse to Amazon Redshift using AWS SCT and AWS DMS.
• ​
Lift and shift the MySQL databases to Amazon EC2 using AWS Snowball.
• ​
Migrate the Oracle data warehouse to an Amazon ElastiCache for Redis cluster using AWS DMS.
• ​
Migrate the MySQL databases to Amazon RDS for MySQL using AWS DMS.

A

Migrate the Oracle data warehouse to Amazon Redshift using AWS SCT and AWS DMS.
(Correct)
Migrate the MySQL databases to Amazon RDS for MySQL using AWS DMS.
(Correct)
Explanation
In this scenario we must determine the best platform to run each database on and the best migration path to get there. Cost and operational overhead must be minimized.
The best solution for an Oracle data warehouse is to migrate it to Amazon Redshift, a managed service designed to run data warehouses (a relational database for OLAP use cases). This requires the schema to be converted, which means AWS SCT should be used, and AWS DMS can then migrate the actual data.
The MySQL databases can be run on Amazon RDS for MySQL. This provides a managed service and does not require schema modifications, so AWS DMS can be used without AWS SCT.
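Once AWS SCT has converted the schema, the DMS side can be sketched roughly as below; the ARNs and identifiers are placeholders, and the source/target endpoints and replication instance are assumed to already exist.

```python
import json
import boto3

dms = boto3.client("dms")

# Full-load migration of the MySQL databases into Amazon RDS for MySQL.
dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-rds-full-load",
    SourceEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111111111111:rep:INSTANCE",
    MigrationType="full-load",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```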
CORRECT: "Migrate the Oracle data warehouse to Amazon Redshift using AWS SCT and AWS DMS" is a correct answer.
CORRECT: "Migrate the MySQL databases to Amazon RDS for MySQL using AWS DMS" is also a correct answer.
INCORRECT: "Lift and shift the Oracle data warehouse to Amazon EC2 using AWS Snowball" is incorrect. There is no indication that bandwidth is an issue or that the database is particularly large, so Snowball is not required. Also, Amazon EC2 does not reduce operational overhead.
INCORRECT: "Lift and shift the MySQL databases to Amazon EC2 using AWS Snowball" is incorrect. Amazon EC2 does not reduce operational overhead, so it is not the best choice. As with the previous explanation, there is no indication that Snowball is required.
INCORRECT: "Migrate the Oracle data warehouse to an Amazon ElastiCache for Redis cluster using AWS DMS" is incorrect. ElastiCache is mainly used for caching data in-memory from other databases and is not the best choice for a data warehouse. Also, SCT would be needed to modify the schema.
References:
https://aws.amazon.com/getting-started/hands-on/migrate-oracle-to-amazon-redshift/
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-mysql-database-to-amazon-rds-for-mysql.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-database/
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-migration-transfer/

7
Q

A company has recently adopted a hybrid cloud architecture which requires them to migrate their databases from their on-premises data center to AWS. One of their applications requires a heterogeneous database migration in which they need to transform their on-premises Oracle database to PostgreSQL. A schema and code transformation should be done first in order to successfully migrate the data.
Which of the following options is the most suitable approach to migrate the database in AWS?
• ​
Use a combination of AWS Data Pipeline service and CodeCommit to convert the source schema and code to match that of the target PostgreSQL database in RDS. Use AWS Batch with Spot EC2 instances to cost-effectively migrate the data from the source database to the target database in a batch process.
• ​
Use the AWS Serverless Application Model (SAM) service to transform your database to PostgreSQL using AWS Lambda functions. Migrate the database to RDS using the AWS Database Migration Service (DMS).
• ​
Use the AWS Schema Conversion Tool (SCT) to convert the source schema to match that of the target database. Migrate the data using the AWS Database Migration Service (DMS) from the source database to an Amazon RDS for PostgreSQL database.
• ​
Migrate the database from your on-premises data center using the AWS Server Migration Service (SMS). Afterwards, use the AWS Database Migration Service to convert and migrate your data to Amazon RDS for PostgreSQL database.

A

Use the AWS Schema Conversion Tool (SCT) to convert the source schema to match that of the target database. Migrate the data using the AWS Database Migration Service (DMS) from the source database to an Amazon RDS for PostgreSQL database.
(Correct)

Explanation
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases.
AWS Database Migration Service can migrate your data to and from most of the widely used commercial and open source databases. It supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora. Migrations can be from on-premises databases to Amazon RDS or Amazon EC2, databases running on EC2 to RDS, or vice versa, as well as from one RDS database to another RDS database. It can also move data between SQL, NoSQL, and text-based targets.

In heterogeneous database migrations, the source and target database engines are different, as in the case of Oracle to Amazon Aurora, Oracle to PostgreSQL, or Microsoft SQL Server to MySQL migrations. In this case, the schema structure, data types, and database code of the source and target databases can be quite different, requiring a schema and code transformation before the data migration starts. That makes heterogeneous migrations a two-step process.
First, use the AWS Schema Conversion Tool to convert the source schema and code to match that of the target database, and then use the AWS Database Migration Service to migrate data from the source database to the target database. All the required data type conversions will automatically be done by the AWS Database Migration Service during the migration. The source database can be located in your own premises outside of AWS, running on an Amazon EC2 instance, or it can be an Amazon RDS database. The target can be a database in Amazon EC2 or Amazon RDS.
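For illustration, the two DMS endpoints for such an Oracle-to-PostgreSQL migration could be defined as follows; all host names, credentials, and identifiers are placeholders.

```python
import boto3

dms = boto3.client("dms")

# Source: the on-premises Oracle database.
dms.create_endpoint(
    EndpointIdentifier="onprem-oracle-source",
    EndpointType="source",
    EngineName="oracle",
    ServerName="oracle.example.internal",
    Port=1521,
    DatabaseName="ORCL",
    Username="dms_user",
    Password="example-password",
)

# Target: the Amazon RDS for PostgreSQL instance (schema already converted with SCT).
dms.create_endpoint(
    EndpointIdentifier="rds-postgres-target",
    EndpointType="target",
    EngineName="postgres",
    ServerName="mydb.example.us-east-1.rds.amazonaws.com",
    Port=5432,
    DatabaseName="appdb",
    Username="dms_user",
    Password="example-password",
)
```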
The option that says: Migrate the database from your on-premises data center using the AWS Server Migration Service (SMS). Afterwards, use the AWS Database Migration Service to convert and migrate your data to Amazon RDS for PostgreSQL database is incorrect because the AWS Server Migration Service (SMS) is primarily used to migrate virtual machines such as VMware vSphere and Windows Hyper-V. Although it is correct to use AWS Database Migration Service (DMS) to migrate the database, this option is still wrong because you should use the AWS Schema Conversion Tool to convert the source schema.
The option that says: Use a combination of AWS Data Pipeline service and CodeCommit to convert the source schema and code to match that of the target PostgreSQL database in RDS. Use AWS Batch with Spot EC2 instances to cost-effectively migrate the data from the source database to the target database in a batch process is incorrect because AWS Data Pipeline is primarily used to quickly and easily provision pipelines that remove the development and maintenance effort required to manage your daily data operations, letting you focus on generating insights from that data. Although you can use it to connect to data in your on-premises data center, it is not the most suitable service to use compared with AWS DMS.
The option that says: Use the AWS Serverless Application Model (SAM) service to transform your database to PostgreSQL using AWS Lambda functions. Migrate the database to RDS using the AWS Database Migration Service (DMS) is incorrect because the Serverless Application Model (SAM) is an open-source framework that is primarily used to build serverless applications on AWS, not for database migration.

References:

https://aws.amazon.com/dms/
https://aws.amazon.com/cloud-migration/

Check out these cheat sheets on AWS Database Migration Service and other common strategies for cloud migration:

https://tutorialsdojo.com/aws-cheat-sheet-aws-database-migration-service/
https://tutorialsdojo.com/aws-cheat-sheet-aws-migration-strategies-the-6-rs/

8
Q

A company requires federated access to AWS for users of a mobile application. The security team has mandated that the application must use a custom-built solution for authenticating users and use IAM roles for authorization.
Which of the following actions would enable authentication and authorization and satisfy the requirements? (Select TWO.)
• ​
Use a custom-built SAML-compatible solution for authentication and use AWS SSO for authorization.
• ​
Use a custom-built OpenID Connect-compatible solution for authentication and use Amazon Cognito for authorization.
• ​
Use a custom-built OpenID Connect-compatible solution with AWS SSO for authentication and authorization.
• ​
Create a custom-built LDAP connector using Amazon API Gateway and AWS Lambda for authentication. Use a token-based Lambda authorizer that uses JWT.
• ​
Use a custom-built SAML-compatible solution that uses LDAP for authentication and uses a SAML assertion to perform authorization to the IAM identity provider.

A

Use a custom-built OpenID Connect-compatible solution for authentication and use Amazon Cognito for authorization.
(Correct)
Explanation
There are two possible solutions for this scenario:
· An OpenID Connect provider can be added in IAM to enable federated authentication. An Amazon Cognito identity pool can then be used for authorization. Amazon Cognito identity pools provide temporary AWS credentials for users who are guests (unauthenticated) and for users who have been authenticated and received a token.
· AWS supports identity federation with SAML 2.0. This feature enables federated single sign-on (SSO), so users can log into the AWS Management Console or call the AWS API operations without you having to create an IAM user for everyone in your organization. In your organization’s IdP, you define assertions that map users or groups in your organization to the IAM roles.
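A minimal sketch of the first approach, assuming an identity pool that has already been configured with the custom OIDC provider (the pool ID, provider name, and token below are placeholders):

```python
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# Exchange the OIDC token issued by the custom-built provider for a Cognito identity...
identity = cognito.get_id(
    IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000",
    Logins={"auth.example.com": "<OIDC id_token from the custom provider>"},
)

# ...then obtain temporary AWS credentials scoped by the IAM role attached to the pool.
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins={"auth.example.com": "<OIDC id_token from the custom provider>"},
)

print(creds["Credentials"]["AccessKeyId"], creds["Credentials"]["Expiration"])
```

The temporary credentials are governed by the IAM roles attached to the identity pool, which satisfies the requirement to use IAM roles for authorization.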

CORRECT: "Use a custom-built OpenID Connect-compatible solution for authentication and use Amazon Cognito for authorization" is a correct answer.
CORRECT: "Use a custom-built SAML-compatible solution that uses LDAP for authentication and uses a SAML assertion to perform authorization to the IAM identity provider" is also a correct answer.
INCORRECT: "Use a custom-built SAML-compatible solution for authentication and use AWS SSO for authorization" is incorrect. AWS SSO cannot be used for mobile applications.
INCORRECT: "Create a custom-built LDAP connector using Amazon API Gateway and AWS Lambda for authentication. Use a token-based Lambda authorizer that uses JWT" is incorrect. This is not a complete solution, and API Gateway is not required for this solution.
INCORRECT: "Use a custom-built OpenID Connect-compatible solution with AWS SSO for authentication and authorization" is incorrect. AWS SSO cannot be used for mobile applications.
References:
https://docs.aws.amazon.com/cognito/latest/developerguide/identity-pools.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-security-identity-compliance/

9
Q

A top university has launched its serverless online portal using Lambda and API Gateway in AWS that enables its students to enroll, manage their class schedule, and see their grades online. After a few weeks, the portal abruptly stopped working and lost all of its data. The university hired an external cyber security consultant and, based on the investigation, the outage was due to an SQL injection vulnerability on the portal’s login page, which the attacker exploited by injecting malicious SQL code. You also need to track historical changes to the rules and metrics associated with your firewall.
Which of the following is the most suitable and cost-effective solution to avoid another SQL Injection attack against their infrastructure in AWS?
• ​
Block the IP address of the attacker in the Network Access Control List of your VPC and then set up a CloudFront distribution. Set up AWS WAF to add a web access control list (web ACL) in front of the CloudFront distribution to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
• ​
Create a new Application Load Balancer (ALB) and set up AWS WAF in the load balancer. Place the API Gateway behind the ALB and configure a web access control list (web ACL) in front of the ALB to block requests that contain malicious SQL code. Use AWS Firewall Manager to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
• ​
Use AWS WAF to add a web access control list (web ACL) in front of the Lambda functions to block requests that contain malicious SQL code. Use AWS Firewall Manager, to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
• ​
Use AWS WAF to add a web access control list (web ACL) in front of the API Gateway to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.

A

Use AWS WAF to add a web access control list (web ACL) in front of the API Gateway to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
(Correct)

Explanation
In this scenario, the correct approach is to deploy AWS WAF in front of API Gateway. Hence, the correct answer is the option that says: Use AWS WAF to add a web access control list (web ACL) in front of the API Gateway to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. With AWS Config, you can track changes to WAF web access control lists (web ACLs). For example, you can record the creation and deletion of rules and rule actions, as well as updates to WAF rule configurations.

AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules.
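As a hedged sketch (names, ARNs, and rule priorities are assumptions), a regional web ACL using the AWS managed SQL injection rule set could be created and attached to the API Gateway stage like this:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="portal-web-acl",
    Scope="REGIONAL",                       # REGIONAL scope is used for API Gateway
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "AWSManagedRulesSQLiRuleSet",
        "Priority": 0,
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesSQLiRuleSet",
            }
        },
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "SQLiRuleSet",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "PortalWebACL",
    },
)

# Associate the web ACL with the API Gateway stage (placeholder ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod",
)
```

AWS Config can then record configuration changes to the web ACL to satisfy the change-tracking requirement.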
The option that says: Use AWS WAF to add a web access control list (web ACL) in front of the Lambda functions to block requests that contain malicious SQL code. Use AWS Firewall Manager to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations is incorrect because you have to use AWS WAF in front of the API Gateway and not directly on the Lambda functions. AWS Firewall Manager is primarily used to manage your firewall rules across multiple AWS accounts in your AWS Organization and hence is not suitable for tracking changes to WAF web access control lists. You should use AWS Config instead.
The option that says: Block the IP address of the attacker in the Network Access Control List of your VPC and then set up a CloudFront distribution. Set up AWS WAF to add a web access control list (web ACL) in front of the CloudFront distribution to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations is incorrect because even though it is valid to use AWS WAF with CloudFront, it entails an additional and unnecessary cost to launch a CloudFront distribution for this scenario. There is no requirement that the serverless online portal be globally accessible; hence, a CloudFront distribution is not necessary.
The option that says: Create a new Application Load Balancer (ALB) and set up AWS WAF in the load balancer. Place the API Gateway behind the ALB and configure a web access control list (web ACL) in front of the ALB to block requests that contain malicious SQL code. Use AWS Firewall Manager to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations is incorrect because launching a new Application Load Balancer entails additional cost and is not cost-effective. In addition, AWS Firewall Manager is primarily used to manage your firewall rules across multiple AWS accounts in your AWS Organization. Using AWS Config is much more suitable for tracking changes to WAF web access control lists.

References:

https://aws.amazon.com/waf/
https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html

Check out this AWS WAF Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-aws-waf/

Tutorials Dojo’s AWS Certified Solutions Architect Professional Exam Study Guide:
https://tutorialsdojo.com/aws-cheat-sheet-aws-certified-solutions-architect-professional/

10
Q

You are the Lead Solutions Architect for an IT consulting firm whose various teams and departments have been grouped into several organizational units (OUs) using AWS Organizations. You received a report from the security team that there was a suspected breach in your environment, where a third-party AWS account was suddenly added to your organization without any prior approval. The external account has high-level access privileges to the accounts that you own, but luckily no detrimental action was performed.
What should you do to properly set up a monitoring system that notifies you of any changes to your AWS accounts? (Select TWO.)
• ​
Set up a CloudWatch Dashboard to monitor any changes to your organizations and create an SNS topic that would send you a notification.
• ​
Provision an AWS-approved third-party monitoring tool from the AWS Marketplace that would send alerts if a breach was detected. Use Amazon GuardDuty to analyze any possible breach and notify the administrators using Amazon SNS.
• ​
Create a trail in Amazon CloudTrail to capture all API calls to your AWS Organizations, including calls from the AWS Organizations console and from code calls to the AWS Organizations APIs. Use CloudWatch Events and SNS to raise events when administrator-specified actions occur in an organization and send a notification to you.
• ​
Monitor all changes to your organization using Systems Manager and use CloudWatch Events to notify you for any new activity to your account.
• ​
Use AWS Config to monitor the compliance of your AWS Organizations. Set up an SNS Topic or CloudWatch Events that will send alerts to you for any changes.

A

• ​
Create a trail in Amazon CloudTrail to capture all API calls to your AWS Organizations, including calls from the AWS Organizations console and from code calls to the AWS Organizations APIs. Use CloudWatch Events and SNS to raise events when administrator-specified actions occur in an organization and send a notification to you.
(Correct)
• ​
Use AWS Config to monitor the compliance of your AWS Organizations. Set up an SNS Topic or CloudWatch Events that will send alerts to you for any changes.
(Correct)

Explanation
AWS Organizations can work with CloudWatch Events to raise events when administrator-specified actions occur in an organization. For example, because of the sensitivity of such actions, most administrators would want to be warned every time someone creates a new account in the organization or when an administrator of a member account attempts to leave the organization. You can configure CloudWatch Events rules that look for these actions and then send the generated events to administrator-defined targets. Targets can be an Amazon SNS topic that emails or text messages its subscribers. Combining this with Amazon CloudTrail, you can set an event to trigger whenever a matching API call is received.
Multi-account, multi-region data aggregation in AWS Config enables you to aggregate AWS Config data from multiple accounts and regions into a single account. Multi-account, multi-region data aggregation is useful for central IT administrators to monitor compliance for multiple AWS accounts in the enterprise. An aggregator is a resource type in AWS Config that collects AWS Config data from multiple source accounts and regions.
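For illustration (the rule name and SNS topic ARN are placeholders), a CloudWatch Events rule that matches Organizations API calls recorded by CloudTrail can be wired to an SNS topic like this:

```python
import json
import boto3

# Organizations management events are typically delivered in us-east-1.
events = boto3.client("events", region_name="us-east-1")

# Match Organizations API calls captured by CloudTrail, e.g. new accounts being added.
events.put_rule(
    Name="org-change-alerts",
    EventPattern=json.dumps({
        "source": ["aws.organizations"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["organizations.amazonaws.com"],
            "eventName": ["InviteAccountToOrganization", "CreateAccount", "LeaveOrganization"],
        },
    }),
)

# Send matched events to an SNS topic that emails the administrators.
events.put_targets(
    Rule="org-change-alerts",
    Targets=[{"Id": "sns-admin-alerts", "Arn": "arn:aws:sns:us-east-1:111111111111:org-alerts"}],
)
```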
Hence, the following options are the correct answers in this scenario:
1. Create a trail in Amazon CloudTrail to capture all API calls to your AWS Organizations, including calls from the AWS Organizations console and from code calls to the AWS Organizations APIs. Use CloudWatch Events and SNS to raise events when administrator-specified actions occur in an organization and send a notification to you.
2. Use AWS Config to monitor the compliance of your AWS Organizations. Set up an SNS Topic or CloudWatch Events that will send alerts to you for any changes.

Monitoring all changes to your organization using Systems Manager and using CloudWatch Events to notify you for any new activity to your account is incorrect because AWS Systems Manager is a collection of capabilities for configuring and managing your Amazon EC2 instances, on-premises servers and virtual machines, and other AWS resources at scale. It cannot be used to monitor changes to the setup of AWS Organizations.
Setting up a CloudWatch Dashboard to monitor any changes to your organizations and creating an SNS topic that would send you a notification is incorrect because a CloudWatch Dashboard is primarily used to monitor your AWS resources, not the configuration of your AWS Organizations. Although you can enable sharing of all CloudWatch Events across all accounts in your organization, this cannot be used to monitor whether a new AWS account has been added to your organization. Most of the time, the Amazon CloudWatch Events service is primarily used to monitor your AWS resources and the applications you run on AWS in real time.
Provisioning an AWS-approved third-party monitoring tool from the AWS Marketplace that would send alerts if a breach was detected, then using Amazon GuardDuty to analyze any possible breach and notifying the administrators using Amazon SNS is incorrect because this option entails a lot of configuration, which does not fit the scenario. GuardDuty might not flag similar future incidents as malicious if they were performed by an authenticated user already within the organization.

References:

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_monitoring.html
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_tutorials_cwe.html

11
Q

A company runs an application in an on-premises data center that uses an IBM Db2 database. The web application calls an API that runs stored procedures on the database to retrieve read-only data. The dataset is constantly updated. Users have reported significant latency when attempting to retrieve data. The company is concerned about Db2 CPU licensing costs and the performance of the database.
Which approach should a Solutions Architect take to migrate to AWS and resolve these concerns?
• ​
Export data on a daily basis and upload to Amazon S3. Refactor the API to use the S3 data. Implement Amazon API Gateway and enable API caching.
• ​
Use local storage to cache query output. Use S3 COPY commands to sync the dataset to Amazon S3. Refactor the API to use Amazon EFS. Implement Amazon API Gateway and enable API caching.
• ​
Use AWS DMS to migrate data to Amazon DynamoDB using a continuous replication task. Refactor the API to use the DynamoDB data. Implement the refactored API in Amazon API Gateway and enable API caching.
• ​
Rehost the Db2 database to an Amazon EC2 instance. Migrate all the data. Enable caching using an instance store. Refactor the API to use the Amazon EC2 Db2 database. Implement Amazon API Gateway and enable API caching.

A

• ​
Use AWS DMS to migrate data to Amazon DynamoDB using a continuous replication task. Refactor the API to use the DynamoDB data. Implement the refactored API in Amazon API Gateway and enable API caching.
(Correct)

The AWS Database Migration Service (DMS) can be used to migrate from IBM Db2 to targets including relational databases (such as Oracle and Amazon Aurora), a data warehouse (Amazon Redshift), a NoSQL database (Amazon DynamoDB), or an Amazon S3 bucket.
You can create an AWS DMS task that captures ongoing changes to the source data store. You can do this capture while you are migrating your data. You can also create a task that captures ongoing changes after you complete your initial (full-load) migration to a supported target data store.
This process is called ongoing replication or change data capture (CDC). AWS DMS uses this process when replicating ongoing changes from a source data store. This process works by collecting changes to the database logs using the database engine’s native API.

CORRECT: "Use AWS DMS to migrate data to Amazon DynamoDB using a continuous replication task. Refactor the API to use the DynamoDB data. Implement the refactored API in Amazon API Gateway and enable API caching" is the correct answer.
INCORRECT: "Rehost the Db2 database to an Amazon EC2 instance. Migrate all the data. Enable caching using an instance store. Refactor the API to use the Amazon EC2 Db2 database. Implement Amazon API Gateway and enable API caching" is incorrect. This solution does not include a method of synchronizing the data changes, and it does not address the Db2 licensing cost concern.
INCORRECT: "Use local storage to cache query output. Use S3 COPY commands to sync the dataset to Amazon S3. Refactor the API to use Amazon EFS. Implement Amazon API Gateway and enable API caching" is incorrect. You cannot refactor an API to use EFS in this way, as EFS is a file system service that must be mounted on compute instances.
INCORRECT: "Export data on a daily basis and upload to Amazon S3. Refactor the API to use the S3 data. Implement Amazon API Gateway and enable API caching" is incorrect. This solution does not use continuous replication, so the dataset could be out of date when it is queried.
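As a final illustrative sketch (the API ID and stage name are placeholders), the API caching called for in the correct answer is a stage-level setting in Amazon API Gateway:

```python
import boto3

apigw = boto3.client("apigateway")

# Turn on a 1.6 GB stage cache with a 5-minute TTL for the read-only API.
apigw.update_stage(
    restApiId="abc123def4",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "1.6"},
        {"op": "replace", "path": "/*/*/caching/enabled", "value": "true"},
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "300"},
    ],
)
```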
References:
https://aws.amazon.com/blogs/database/aws-database-migration-service-and-aws-schema-conversion-tool-now-support-ibm-db2-as-a-source/
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Task.CDC.html
Save time with our exam-specific cheat sheets:
https://digitalcloud.training/certification-training/aws-certified-solutions-architect-professional/aws-migration-transfer/
