AWS Solutions Architect Associate Flashcards

1
Q

What is a proper definition of an IAM Role?

1) IAM Users in multiple User Groups
2) An IAM entity that defines a password policy for IAM users
3) An IAM entity that defines a set of permissions for making requests to AWS services, and will be used by an AWS service
4) Permissions assigned to IAM Users to perform actions

A

3) An IAM entity that defines a set of permissions for making requests to AWS services, and will be used by an AWS service

Some AWS services need to perform actions on your behalf. To do so, you assign permissions to AWS services with IAM Roles.
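As an example, an IAM Role that will be used by the EC2 service carries a trust policy like the following (a standard sketch; the actual permissions come from the permission policies attached to the role):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
```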

2
Q

Which of the following is an IAM Security Tool?

1) IAM Credentials Report
2) IAM Root Account Manager
3) IAM Services Report
4) IAM Security Advisor

A

1) IAM Credentials Report

The IAM Credentials Report lists all your AWS account's IAM Users and the status of their various credentials.

3
Q

Which answer is INCORRECT regarding IAM Users?

1) IAM Users can belong to multiple User Groups
2) IAM Users don’t have to belong to a User Group
3) IAM Policies can be attached directly to IAM Users
4) IAM Users access AWS services using root account credentials

A

4) IAM Users access AWS services using root account credentials

IAM Users access AWS services using their own credentials (username & password or Access Keys).

4
Q

Which of the following is an IAM best practice?

1) Create several IAM Users for one physical person
2) Don’t use the root user account
3) Share your AWS account credentials with your colleague, so they can perform a task for you
4) Do not enable MFA for easier access

A

2) Don’t use the root user account

Use the root account only to create your first IAM User and a few account/service management tasks. For everyday tasks, use an IAM User.

5
Q

What are IAM Policies?

1) A set of policies that defines how AWS accounts interact with each other
2) JSON documents that define a set of permissions for making requests to AWS services, and can be used by IAM Users, User Groups, and IAM Roles
3) A set of policies that define a password for IAM Users
4) A set of policies defined by AWS that show how customers interact with AWS

A

2) JSON documents that define a set of permissions for making requests to AWS services, and can be used by IAM Users, User Groups, and IAM Roles

6
Q

What is tenancy in regards to EC2?

A

Tenancy defines how EC2 instances are distributed across physical hardware and affects pricing. There are three tenancy options available:

1) Shared (default) — Multiple AWS accounts may share the same physical hardware.

2) Dedicated Instance (dedicated) — Your instance runs on single-tenant hardware.

3) Dedicated Host (host) — Your instance runs on a physical server with EC2 instance capacity fully dedicated to your use, an isolated server with configurations that you can control.

7
Q

Which principle should you apply regarding IAM Permissions?
1) Grant most privilege
2) Grant more permissions if your employee asks you to
3) Grant least privilege
4) Restrict root account permissions

A

3) Grant least privilege

Don’t give more permissions than the user needs.

8
Q

What should you do to increase your root account security?

1) Remove permissions from the root account
2) Only access AWS services through AWS Command Line Interface (CLI)
3) Don’t create IAM Users, only access your AWS account using the root account
4) Enable MFA

A

4) Enable MFA

Enabling MFA adds another layer of security: even if your password is stolen, lost, or hacked, your account is not compromised.

9
Q

IAM User Groups can contain IAM Users and other User Groups.

True
False

A

False

IAM User Groups can contain only IAM Users.

10
Q

An IAM policy consists of one or more statements. A statement in an IAM Policy consists of the following, EXCEPT:

1) Effect
2) Principal
3) Version
4) Action
5) Resource

A

3) Version

A statement in an IAM Policy consists of Sid, Effect, Principal, Action, Resource, and Condition. Version is part of the IAM Policy itself, not the statement.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "1",
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam::account-id:root"]},
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::mybucket",
      "arn:aws:s3:::mybucket/*"
    ]
  }]
}

11
Q

You have strong regulatory requirements to only allow fully internally audited AWS services in production. You still want to allow your teams to experiment in a development environment while services are being audited. How can you best set this up?

1) Provide the Dev team with a completely independent AWS account
2) Apply a global IAM policy on your Prod account
3) Create an AWS Organization and create 2 Prod and Dev OUs, then apply an SCP on the Prod OU
4) Create an AWS Config Rule

A

3) Create an AWS Organization and create 2 Prod and Dev OUs, then apply an SCP on the Prod OU

12
Q

You are managing the AWS account for your company, and you want to give one of the developers access to read files from an S3 bucket. You have updated the bucket policy to this, but he still can’t access the files in the bucket. What is the problem?

{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowsRead",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/Dave"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::static-files-bucket-xxx"
     }]
}

1) Everything is okay, he just needs to logout and login again
2) The bucket does not contain any files yet
3) You should change the resource to arn:aws:s3:::static-files-bucket-xxx/*, because this is an object level permission

A

3) You should change the resource to arn:aws:s3:::static-files-bucket-xxx/*, because this is an object level permission
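With that fix applied, the policy's Resource points at the objects inside the bucket rather than the bucket itself:

```json
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowsRead",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/Dave"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::static-files-bucket-xxx/*"
    }]
}
```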

13
Q

You have 5 AWS Accounts that you manage using AWS Organizations. You want to restrict access to certain AWS services in each account. How should you do that?
1) Using IAM Roles
2) Using AWS Organizations SCP
3) Using AWS Config

A

2) Using AWS Organizations SCP

14
Q

Which of the following IAM condition keys can you use to allow API calls only to a specified AWS Region?

1) aws:RequiredRegion
2) aws:SourceRegion
3) aws:InitialRegion
4) aws:RequestedRegion

A

4) aws:RequestedRegion
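For instance, a policy can deny every API call made outside an approved Region (a minimal sketch; the Region value is illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "StringNotEquals": {"aws:RequestedRegion": ["eu-west-1"]}
    }
  }]
}
```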

15
Q

When configuring permissions for EventBridge to configure a Lambda function as a target you should use ………………….. but when you want to configure a Kinesis Data Streams as a target you should use …………………..

1) Identity-Based Policy, Resource-Based Policy
2) Resource-Based Policy, Identity-Based Policy
3) Identity-Based Policy, Identity-Based Policy
4) Resource-Based Policy, Resource-Based Policy

A

2) Resource-Based Policy, Identity-Based Policy

16
Q

Which AWS Directory Service is best suited for an organization looking to extend their existing on-premises Active Directory to the AWS Cloud without replicating their AD data?

1) AWS Managed Microsoft AD
2) AD Connector
3) Simple AD
4) Amazon Cognito

A
2) AD Connector

AD Connector acts as a proxy that redirects directory requests to your existing on-premises Active Directory, allowing you to manage AWS resources without replicating your AD data.

17
Q

A company requires a fully managed, highly available, and scalable Active Directory service in AWS to support their Windows-based applications. Which AWS Directory Service should they use?

A. Simple AD
B. Amazon Cognito
C. AWS Managed Microsoft AD
D. AD Connector

A

C. AWS Managed Microsoft AD

AWS Managed Microsoft AD is a full-fledged Active Directory managed by AWS, ideal for Windows-based applications and complex AD tasks.

18
Q

Which AWS Directory Service offers a cost-effective solution for small to medium-sized businesses that need basic AD capabilities such as domain joining and group policies?

A. AWS Managed Microsoft AD
B. Amazon Cognito
C. AD Connector
D. Simple AD

A

D. Simple AD

Simple AD is a Samba-based, AD-compatible service that provides basic Active Directory features, making it suitable for smaller businesses with basic directory service needs.

19
Q

An organization wants to use its existing server-bound software licenses (such as Windows Server and SQL Server) within AWS. Which AWS Directory Service supports Bring Your Own License (BYOL) compatibility?

A. AWS Managed Microsoft AD
B. AD Connector
C. Amazon Cognito
D. Simple AD

A

A. AWS Managed Microsoft AD

AWS Managed Microsoft AD allows for Bring Your Own License (BYOL) compatibility, enabling the use of existing server-bound software licenses within AWS.

20
Q

An organization wants to ensure that their IAM policies allow access to an S3 bucket only if the requests are coming from IP addresses within their corporate network. Which IAM policy condition key should be used to achieve this?

A. aws:SourceIp
B. aws:SourceArn
C. aws:UserAgent
D. aws:SecureTransport

A

A. aws:SourceIp

The aws:SourceIp condition key in IAM policies is used to specify the IP address or IP address range from which the requests are allowed or denied.
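A sketch of such a policy, denying S3 access from anywhere outside an illustrative corporate CIDR (the bucket name and IP range are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::example-bucket",
      "arn:aws:s3:::example-bucket/*"
    ],
    "Condition": {
      "NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}
    }
  }]
}
```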

21
Q

A company wants to restrict access to their AWS resources, ensuring that API calls are only made using HTTPS. Which IAM policy condition key should be utilized to enforce this policy?

A. aws:SecureTransport
B. aws:SourceIp
C. aws:UserAgent
D. aws:RequestTime

A

A. aws:SecureTransport

The aws:SecureTransport condition key is used in IAM policies to check whether the request was sent using SSL (HTTPS).
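A common sketch is an explicit Deny that fires whenever the request was not sent over SSL:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {"Bool": {"aws:SecureTransport": "false"}}
  }]
}
```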

22
Q

How can an AWS Solutions Architect restrict IAM user access to resources based on the user’s tagged department, such as only allowing access to resources tagged with “Department”: “Finance”?

A. Use the aws:RequestTag/Department condition key.
B. Use the aws:TagKeys condition key.
C. Use the aws:ResourceTag/Department condition key.
D. Use the aws:User/Department condition key.

A

C. Use the aws:ResourceTag/Department condition key

The aws:ResourceTag/Department condition key in IAM policies allows for the specification of conditions based on the tags on the AWS resource being accessed.
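A minimal sketch granting EC2 actions only on resources tagged Department=Finance (the service and actions are illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "ec2:*",
    "Resource": "*",
    "Condition": {
      "StringEquals": {"aws:ResourceTag/Department": "Finance"}
    }
  }]
}
```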

23
Q

To comply with regulatory requirements, a Solutions Architect needs to ensure that IAM users can only modify AWS resources if they use a specific client application. Which IAM condition key can be used to enforce this policy?

A. aws:SourceIp
B. aws:UserAgent
C. aws:RequestTag/Client
D. aws:CalledVia

A

B. aws:UserAgent

The aws:UserAgent condition key allows policies to specify conditions based on the client application identified in the user agent string of the request.

24
Q

An organization wants to enhance the security of their AWS environment by ensuring that certain sensitive actions, like terminating EC2 instances, can only be performed by users who have authenticated using Multi-Factor Authentication (MFA). Which IAM policy condition key should be used to enforce this security requirement?

A. aws:MultiFactorAuthPresent
B. aws:SecureTransport
C. aws:TokenIssueTime
D. aws:UserAgent

A

A. aws:MultiFactorAuthPresent

The aws:MultiFactorAuthPresent condition key in IAM policies is used to verify whether the requester has authenticated with Multi-Factor Authentication (MFA). This condition can be set to true to enforce that the specified action is allowed only when the user is MFA-authenticated, enhancing the security for sensitive operations.

Example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}}
    }
  ]
}
25
Q

What is a primary feature of AWS IAM Identity Center?

A. It allows for the creation and management of AWS resources.
B. It is a web service for organizing and federating access to AWS accounts and business applications.
C. It is used for hardware-based key storage for cryptographic operations.
D. It offers a centralized service for billing and cost management of AWS resources.

A

B. It is a web service for organizing and federating access to AWS accounts and business applications.

AWS IAM Identity Center is designed to help manage access to AWS accounts and business applications, providing a single location to centrally manage access. It allows users to sign in once and access multiple accounts and applications.

26
Q

Which feature does AWS IAM Identity Center (SSO) provide to enhance security and streamline user access management across multiple AWS accounts?

A. Multi-factor authentication for each AWS account separately.
B. Centralized user access to multiple AWS accounts with a single set of credentials.
C. Automated encryption and decryption of AWS secrets.
D. Direct control over the underlying EC2 instances running IAM services.

A

B. Centralized user access to multiple AWS accounts with a single set of credentials.

AWS IAM Identity Center allows users to access multiple AWS accounts and applications using a single set of credentials, thereby centralizing and streamlining user access management.

27
Q

In AWS IAM Identity Center, which of the following best describes the function of permission sets?

A. They are used to assign EC2 instance types to users.
B. They define the IAM roles that users can assume in AWS accounts.
C. They encrypt data stored in S3 buckets.
D. They monitor and log user activity within the AWS Management Console.

A

B. They define the IAM roles that users can assume in AWS accounts

In AWS IAM Identity Center, permission sets define the IAM roles that users can assume when accessing AWS accounts, allowing for fine-grained access control.

28
Q

How does AWS IAM Identity Center integrate with existing corporate directories?

A. It replaces the need for a corporate directory with its own user database.
B. It provides a physical storage solution for corporate directory data.
C. It enables integration with existing directories like Microsoft Active Directory for user authentication.
D. It only integrates with AWS Managed Microsoft AD and not with any on-premises directories.

A

C. It enables integration with existing directories like Microsoft Active Directory for user authentication.

AWS IAM Identity Center supports integration with existing corporate directories, such as Microsoft Active Directory, to authenticate and manage user access, allowing for a seamless connection between AWS and existing user-management systems.

29
Q

Which EC2 Purchasing Option can provide you the biggest discount, but it is not suitable for critical jobs or databases?

1) Convertible Reserved Instances
2) Dedicated Hosts
3) Spot Instances

A

3) Spot Instances

Spot Instances are good for short workloads, and this is the cheapest EC2 Purchasing Option. However, they are less reliable because you can lose your EC2 instance at any time.

30
Q

What should you use to control traffic in and out of EC2 instances?

1) Network Access Control List (NACL)
2) Security Groups
3) IAM Policies

A

2) Security Groups

Security Groups operate at the EC2 instance level and can control traffic.

31
Q

How long can you reserve an EC2 Reserved Instance?

1) 1 or 3 years
2) 2 or 4 years
3) 6 months or 1 year
4) Anytime between 1 and 3 years

A

1) 1 or 3 years

EC2 Reserved Instances can be reserved for 1 or 3 years only.

32
Q

You would like to deploy a High-Performance Computing (HPC) application on EC2 instances. Which EC2 instance type should you choose?

1) Storage Optimized
2) Compute Optimized
3) Memory Optimized
4) General Purpose

A

2) Compute Optimized

Compute Optimized EC2 instances are great for compute-intensive workloads requiring high-performance processors (e.g., batch processing, media transcoding, high-performance computing, scientific modeling & machine learning, and dedicated gaming servers).

33
Q

Which EC2 Purchasing Option should you use for an application you plan to run on a server continuously for 1 year?

1) Reserved Instances
2) Spot Instances
3) On-Demand Instances

A

1) Reserved Instances

Reserved Instances are good for long workloads. You can reserve EC2 instances for 1 or 3 years.

34
Q

You are preparing to launch an application that will be hosted on a set of EC2 instances. This application needs some software installation and some OS packages need to be updated during the first launch. What is the best way to achieve this when you launch the EC2 instances?

1) Connect to each EC2 instance using SSH, then install the required software and update your OS packages manually
2) Write a bash script that installs the required software and updates to your OS, then contact AWS Support and provide them with the script. They will run it on your EC2 instances at launch
3) Write a bash script that installs the required software and updates to your OS, then use this script in the EC2 User Data when you launch your EC2 instances

A

3) EC2 User Data

EC2 User Data is used to bootstrap your EC2 instances using a bash script. This script can contain commands such as installing software/packages, downloading files from the Internet, or anything else you need.
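A minimal User Data sketch for an Amazon Linux AMI (the packages and the web page are illustrative; the script runs as root on first boot):

```bash
#!/bin/bash
# Update OS packages, then install and start a web server
yum update -y
yum install -y httpd
systemctl enable --now httpd
echo "<h1>Hello from $(hostname -f)</h1>" > /var/www/html/index.html
```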

35
Q

Which EC2 Instance Type should you choose for a critical application that uses an in-memory database?

1) Storage Optimized
2) Compute Optimized
3) Memory Optimized
4) General Purpose

A

3) Memory Optimized

Memory Optimized EC2 instances are great for workloads requiring large data sets in memory.

36
Q

You have an e-commerce application with an OLTP database hosted on-premises. The application's popularity means its database receives thousands of requests per second. You want to migrate the database to an EC2 instance. Which EC2 Instance Type should you choose to handle this high-frequency OLTP database?

1) Storage Optimized
2) Compute Optimized
3) Memory Optimized
4) General Purpose

A

1) Storage Optimized

Storage Optimized EC2 instances are great for workloads requiring high, sequential read/write access to large data sets on local storage.

37
Q

Security Groups can be attached to only one EC2 instance.

True
False

A

False

Security Groups can be attached to multiple EC2 instances within the same AWS Region/VPC.

38
Q

You’re planning to migrate on-premises applications to AWS. Your company has strict compliance requirements that require your applications to run on dedicated servers. You also need to use your own server-bound software license to reduce costs. Which EC2 Purchasing Option is suitable for you?

1) Convertible Reserved Instances
2) Dedicated Hosts
3) Spot Instances

A

2) Dedicated Hosts

Dedicated Hosts are good for companies with strong compliance needs or for software that has complicated licensing models. This is the most expensive EC2 Purchasing Option available.

39
Q

You would like to deploy a database technology on an EC2 instance and the vendor license bills you based on the physical cores and underlying network socket visibility. Which EC2 Purchasing Option allows you to get visibility into them?

1) Spot Instances
2) On-Demand
3) Dedicated Hosts
4) Reserved Instances

A

3) Dedicated Hosts

40
Q

Spot Fleet is a set of Spot Instances and optionally ……………

1) Dedicated Instances
2) On-Demand Instances
3) Dedicated Hosts
4) Reserved Instances

A

2) On-Demand Instances

Spot Fleet is a set of Spot Instances and optionally On-Demand Instances. It allows you to automatically request Spot Instances at the lowest price.

41
Q

You have an e-commerce website and you are preparing for Black Friday, which is the biggest sale of the year. You expect your traffic to increase by 100x. Your website is already using an SQS Standard Queue, and you're running a fleet of EC2 instances in an Auto Scaling Group to consume SQS messages. What should you do to prepare your SQS Queue?

1) Contact AWS Support to pre-warm your SQS Standard Queue
2) Enable Auto Scaling in your SQS queue
3) Increase the capacity of the SQS queue
4) Do nothing, SQS scales automatically

A

4) Do nothing, SQS scales automatically

42
Q

You have an SQS Queue where each consumer polls 10 messages at a time and finishes processing them in 1 minute. After a while, you noticed that the same SQS messages are received by different consumers resulting in your messages being processed more than once. What should you do to resolve this issue?

1) Enable Long Polling
2) Add DelaySeconds parameter to the messages when being produced
3) Increase the Visibility Timeout
4) Decrease the Visibility Timeout

A

3) Increase the Visibility Timeout

SQS Visibility Timeout is a period of time during which Amazon SQS prevents other consumers from receiving and processing the same message. A message is hidden from other consumers only after it has been received from the queue. Increasing the Visibility Timeout gives the consumer more time to process the message and prevents duplicate processing. (default: 30 sec., min.: 0 sec., max.: 12 hours)
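The mechanic can be illustrated with a toy in-memory queue (a simplified sketch, not the SQS API): a message received with a 30-second Visibility Timeout is hidden from other consumers until the timeout expires, after which it is redelivered if it was never deleted.

```python
import time

class ToyQueue:
    """Toy sketch of the SQS visibility-timeout mechanic (not the real API)."""
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}         # message_id -> body
        self.invisible_until = {}  # message_id -> timestamp when it reappears

    def send(self, msg_id, body):
        self.messages[msg_id] = body

    def receive(self, now=None):
        """Return currently visible messages and hide them for the timeout."""
        now = time.time() if now is None else now
        batch = []
        for msg_id, body in self.messages.items():
            if self.invisible_until.get(msg_id, 0) <= now:
                self.invisible_until[msg_id] = now + self.visibility_timeout
                batch.append((msg_id, body))
        return batch

    def delete(self, msg_id):
        self.messages.pop(msg_id, None)
        self.invisible_until.pop(msg_id, None)

q = ToyQueue(visibility_timeout=30)
q.send("m1", "order-123")

first = q.receive(now=0)    # consumer A gets the message at t=0
second = q.receive(now=10)  # consumer B at t=10: message is still hidden
third = q.receive(now=40)   # t=40: timeout expired, message is redelivered
```

If processing takes longer than the timeout (as in the question), the third call is exactly the duplicate delivery described; raising `visibility_timeout` above the processing time prevents it.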

43
Q

Which SQS Queue type allows your messages to be processed exactly once and in order?

1) SQS Standard Queue
2) SQS Dead Letter Queue
3) SQS Delay Queue
4) SQS FIFO Queue

A

4) SQS FIFO Queue

SQS FIFO (First-In-First-Out) Queues have all the capabilities of the SQS Standard Queue, plus two features: first, the order in which messages are sent and received is strictly preserved, and a message is delivered once and remains available until a consumer processes and deletes it; second, duplicate messages are not introduced into the queue.

44
Q

You have 3 different applications to which you'd like to send the same message. All 3 applications are using SQS. Which approach would you choose?

1) Use SQS Replication Feature
2) Use SNS + SQS Fan Out Pattern
3) Send messages Individually to 3 SQS queues

A

2) Use SNS + SQS Fan Out Pattern

This is a common pattern where a single message is sent to an SNS topic and then "fanned out" to multiple SQS queues. This approach is fully decoupled, prevents data loss, and gives you the ability to add more SQS queues (more applications) over time.

45
Q

You have a Kinesis data stream with 6 shards provisioned. This data stream usually receives 5 MB/s of data and sends out 8 MB/s. Occasionally, your traffic spikes up to 2x and you get a ProvisionedThroughputExceeded exception. What should you do to resolve the issue?

1) Add more Shards
2) Enable Kinesis Replication
3) Use SQS as a buffer to Kinesis

A

1) Add more Shards

The capacity limits of a Kinesis data stream are defined by the number of shards within the data stream. The limits can be exceeded by either data throughput or the number of reading data calls. Each shard allows for 1 MB/s incoming data and 2 MB/s outgoing data. You should increase the number of shards within your data stream to provide enough capacity.

46
Q

You have a website where you want to analyze clickstream data such as the sequence of clicks a user makes, the amount of time a user spends, and where the navigation begins and how it ends. You decided to use Amazon Kinesis and have configured the website to send this clickstream data to a Kinesis data stream. While checking the data sent to the stream, you found that the data is not ordered and that the data for one individual user is spread across many shards. How would you fix this problem?

1) There are too many shards, you should only use 1 shard
2) You shouldn’t use multiple consumers, only one and it should re-order data
3) For each record sent to Kinesis, use a partition key that represents the identity of the user

A

3) For each record sent to Kinesis, use a partition key that represents the identity of the user

Kinesis Data Streams uses the partition key associated with each data record to determine which shard a given record belongs to. Using the identity of each user as the partition key ensures the data for each user is sent to the same shard, and hence ordered.
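The idea can be sketched in a few lines (Kinesis actually maps an MD5 hash of the partition key onto a 128-bit hash-key range split across shards; this toy version uses a simple modulo to show the effect):

```python
import hashlib

def shard_for(partition_key, num_shards):
    """Toy mapping of a partition key to a shard: hash the key,
    take it modulo the shard count. Deterministic, so records with
    the same key always land on the same shard."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# Every record for the same user lands on the same shard...
user_shards = {shard_for("user-42", 6) for _ in range(100)}
# ...while records for many different users spread across shards.
spread = {shard_for(f"user-{i}", 6) for i in range(1000)}
```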

47
Q

You are running an application that produces a large amount of real-time data that you want to load into S3 and Redshift. Also, this data needs to be transformed before being delivered to its destination. Which architecture would you choose?

1) SQS + AWS Lambda
2) SNS + HTTP Endpoint
3) Kinesis Data Streams + Kinesis Data Firehose

A

3) Kinesis Data Streams + Kinesis Data Firehose

This is a perfect combination of technologies for loading near real-time data into S3 and Redshift. Kinesis Data Firehose supports custom data transformations using AWS Lambda.

48
Q

Which of the following is NOT a supported subscriber for AWS SNS?

1) Amazon Kinesis Data Streams
2) Amazon SQS
3) HTTP(S) Endpoint
4) AWS Lambda

A

1) Amazon Kinesis Data Streams

Note: Kinesis Data Firehose is now supported, but not Kinesis Data Streams.

49
Q

Which AWS service helps you when you want to send email notifications to your users?

1) Amazon SQS with AWS Lambda
2) Amazon SNS
3) Amazon Kinesis

A

2) Amazon SNS

50
Q

You’re running many micro-services applications on-premises and they communicate using a message broker that supports MQTT protocol. You’re planning to migrate these applications to AWS without re-engineering the applications and modifying the code. Which AWS service allows you to get a managed message broker that supports the MQTT protocol?

1) Amazon SQS
2) Amazon SNS
3) Amazon Kinesis
4) Amazon MQ

A

4) Amazon MQ

Amazon MQ supports industry-standard APIs such as JMS and NMS, and protocols for messaging, including AMQP, STOMP, MQTT, and WebSocket.

51
Q

An e-commerce company is preparing for a big marketing promotion that will bring millions of transactions. Their website is hosted on EC2 instances in an Auto Scaling Group, and they use Amazon Aurora as their database. During their last promotion, the Aurora database became a bottleneck and many transactions failed because the database wasn't prepared to handle that volume. What do you recommend to handle these transactions and prevent failed transactions?

1) Use SQS as a buffer to write to Aurora
2) Host the website in AWS Fargate instead of EC2 instances
3) Migrate Aurora to RDS for SQL Server

A

1) Use SQS as a buffer to write to Aurora

52
Q

A company is using Amazon Kinesis Data Streams to ingest clickstream data and then do some analytical processes on it. There is a campaign in the next few days and the traffic is unpredictable which might grow up to 100x. What Kinesis Data Stream capacity mode do you recommend?

1) Provisioned Mode
2) On-demand Mode

A

2) On-demand Mode

53
Q

What type of firewall can be used in conjunction with API Gateway to help prevent DDoS attacks?

1) Security group
2) Web Application Firewall (WAF)
3) Host firewall
4) NACL

A

2) Web Application Firewall (WAF)

54
Q

If your application needs to process 5,000 messages per second, which type of SQS queue would you use?

1) SQS Performance
2) SQS Standard
3) SQS FIFO
4) SQS paired with an EC2 Auto Scaling Group

A

2) SQS Standard

Auto Scaling Groups are for EC2 instances only.

55
Q

You need to create a new message broker application in AWS. The new application needs to support the JMS messaging protocol. Which service fits your needs?

1) Amazon MQ
2) Amazon SQS
3) Amazon SNS
4) Amazon MSK

A

1) Amazon MQ

56
Q

Which of the following endpoints can use a custom delivery policy to define how Amazon SNS retries the delivery of messages when server-side errors occur?

1) Email
2) SMS
3) HTTP/S
4) It can’t retry messages

A

3) HTTP/S

57
Q

Which service allows for bidirectional data flows between AWS and SaaS applications?

1) AWS AppSync
2) Amazon S3 Replication
3) Amazon AppFlow
4) Amazon MSK

A

3) Amazon AppFlow

58
Q

Which tool can be used to sideline malformed SQS messages?

1) It can’t be done
2) Side-letter queues (SLQ)
3) Alive-letter queues (ALQ)
4) Dead-letter queues (DLQ)

A

4) Dead-letter queues (DLQ)

59
Q

Which AWS service would you choose for serverless orchestration of long-running (up to 1 year) workflows that can integrate with several other AWS services?

1) AWS Step Functions
2) AWS EC2 Spot Instances
3) AWS Lambda
4) Amazon MQ

A

1) AWS Step Functions

This is a serverless orchestration service combining different AWS services for business applications.

60
Q

Which layers of our applications need to be loosely coupled?

1) Internal and external
2) Just internal
3) None — it’s bad practice to loosely couple applications
4) Just external

A

1) Internal and external

All levels of your architecture need to be loosely coupled!

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
61
Q

What is the largest message size you can store in SQS?

1) 256KB
2) 512KB
3) 128KB
4) 1MB

A

1) 256KB
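
The 256 KB limit covers the message body plus attributes. A small sanity check, plus a note on the common workaround for larger payloads:

```python
# SQS rejects messages larger than 256 KB (body plus attributes).
SQS_MAX_BYTES = 256 * 1024

def fits_in_sqs(body: bytes) -> bool:
    """Return True if the payload fits within the SQS message size limit."""
    return len(body) <= SQS_MAX_BYTES

# For larger payloads, the common pattern is to store the object in S3
# and send only its key through the queue (the SQS Extended Client pattern).
print(fits_in_sqs(b"x" * (300 * 1024)))  # False
```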

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
62
Q

You have launched an EC2 instance that will host a NodeJS application. After installing all the required software and configuring your application, you noted down the EC2 instance's public IPv4 address so you could access it. Then you stopped and started the EC2 instance to complete the application configuration. After the restart, you can't access the EC2 instance, and you find that its public IPv4 address has changed. What should you do to assign a fixed public IPv4 address to your EC2 instance?

1) Allocate an Elastic IP and assign it to your EC2 instance
2) From inside your EC2 instance OS, change network configuration from DHCP to static and assign it a public IPv4
3) Contact AWS Support and request a fixed public IPv4 to your EC2 instance
4) This can’t be done, you can only assign a fixed private IPv4 to your EC2 instance

A

1) Allocate an Elastic IP and assign it to your EC2 instance

An Elastic IP is a public IPv4 address that you keep for as long as you want, and you can attach it to one EC2 instance at a time.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
63
Q

You have an application performing big data analysis hosted on a fleet of EC2 instances. You want to ensure your EC2 instances have the highest networking performance while communicating with each other. Which EC2 Placement Group should you choose?

1) Spread Placement Group
2) Cluster Placement Group
3) Partition Placement Group

A

2) Cluster Placement Group

Cluster Placement Groups place your EC2 instances next to each other, which gives you high-performance computing and networking.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
64
Q

You have a critical application hosted on a fleet of EC2 instances in which you want to achieve maximum availability when there’s an AZ failure. Which EC2 Placement Group should you choose?

1) Spread Placement Group
2) Cluster Placement Group
3) Partition Placement Group

A

1) Spread Placement Group

Spread Placement Group places your EC2 instances on different physical hardware across different AZs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
65
Q

Elastic Network Interface (ENI) can be attached to EC2 instances in another AZ.

True
False

A

False

Elastic Network Interfaces (ENIs) are bound to a specific AZ. You cannot attach an ENI to an EC2 instance in a different AZ.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
66
Q

The following are true regarding EC2 Hibernate, EXCEPT:

1) EC2 Instance Root Volume must be an Instance Store volume
2) Supports On-Demand and Reserved Instances
3) EC2 Instance RAM must be less than 150 GB
4) EC2 Instance Root Volume type must be an EBS volume

A

1) EC2 Instance Root Volume must be an Instance Store volume

To enable EC2 Hibernate, the EC2 Instance Root Volume type must be an EBS volume and must be encrypted to ensure the protection of sensitive content.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
67
Q

When would you need to create an EC2 Dedicated Instance?

1) When you need to make sure that AWS support can assist you with a hardware failure
2) When you have an auditing requirement to run your hosts on single-tenant hardware
3) When you want to ensure that your instance will never fail
4) When you need the cheapest price for an instance

A

2) When you have an auditing requirement to run your hosts on single-tenant hardware

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
68
Q

What is EC2 metadata commonly used for?

1) To configure your Security Groups
2) When your code needs to learn something about the EC2 instances that it’s running on
3) When an S3 bucket needs to see where an object was uploaded from
4) To determine how long an instance has been online, which AWS uses to calculate your bill

A

2) When your code needs to learn something about the EC2 instances that it’s running on

EC2 instance metadata can be used to configure or manage a running instance, and can also be used to access user data that was specified when the instance was launched.
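
Metadata is served at the link-local address 169.254.169.254, and IMDSv2 requires a session token obtained via a PUT to `/latest/api/token`. A sketch with a pure URL-building helper; the network calls only work from inside an EC2 instance, so they are not executed here:

```python
import urllib.request

# Helper for the EC2 Instance Metadata Service (IMDSv2). The endpoint
# is only reachable from inside an EC2 instance.
IMDS_BASE = "http://169.254.169.254/latest"

def metadata_url(path: str) -> str:
    """Build a metadata URL, e.g. for 'meta-data/instance-id'."""
    return f"{IMDS_BASE}/{path.lstrip('/')}"

def fetch_metadata(path: str, token: str) -> str:
    """Fetch one metadata value using an IMDSv2 session token."""
    req = urllib.request.Request(
        metadata_url(path), headers={"X-aws-ec2-metadata-token": token})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

print(metadata_url("meta-data/instance-id"))
```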

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
69
Q

What does AWS Outposts do?

1) A remote monitoring tool used to monitor your private cloud
2) Allows you to extend your data center to AWS GovCloud
3) Edge computing device designed for airplanes
4) Allows you to extend the power of the AWS data center to your own data center

A

4) Allows you to extend the power of the AWS data center to your own data center

Outposts allows you to extend the AWS data center to your own data center

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
70
Q

What happens when your Spot instance is chosen by AWS for termination?

1) You will get a 10-minute notification sent to your specified email address.
2) You will get a one-hour notification sent via Amazon SNS directly to your EC2 instance on port 65.
3) While it is possible that your Spot Instance is interrupted before the warnings can be made, AWS makes a best effort to provide two-minute Spot Instance interruption notices to the metadata of your EC2 instance(s).
4) You will get no notification and your host will be terminated without warning.

A

3) While it is possible that your Spot Instance is interrupted before the warnings can be made, AWS makes a best effort to provide two-minute Spot Instance interruption notices to the metadata of your EC2 instance(s).

If your Spot Instance has been marked for termination, a notification is posted, on a best-effort basis, to the metadata of your EC2 instance two minutes before it is stopped or terminated.
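
The notice appears in instance metadata at `meta-data/spot/instance-action` as a small JSON document. A sketch that parses a sample notice shaped like the documented one; in practice you would poll the metadata endpoint from inside the instance:

```python
import json
from datetime import datetime, timezone

# Sample shaped like a Spot interruption notice from instance metadata.
sample_notice = '{"action": "terminate", "time": "2024-01-01T12:00:00Z"}'

def seconds_until_interruption(notice_json: str, now: datetime) -> float:
    """Parse an instance-action notice and return seconds until the action."""
    notice = json.loads(notice_json)
    when = datetime.strptime(notice["time"], "%Y-%m-%dT%H:%M:%SZ")
    when = when.replace(tzinfo=timezone.utc)
    return (when - now).total_seconds()

now = datetime(2024, 1, 1, 11, 58, tzinfo=timezone.utc)
print(seconds_until_interruption(sample_notice, now))  # 120.0
```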

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
71
Q

What service allows you to directly visualize your data in AWS?

1) S3
2) Redshift
3) EMR
4) QuickSight

A

4) QuickSight

QuickSight allows you to create dashboards and visualize your data

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
72
Q

Which of the following scenarios are valid use cases for AWS Data Pipeline? (choose 3)

1) Exporting Amazon RDS data to Amazon S3
2) Restarting Amazon EC2 instances
3) Importing and exporting Amazon DynamoDB data
4) Copying CSV files between Amazon S3 buckets
5) Copying CSV data between two on-premises storage devices

A

1) Exporting Amazon RDS data to Amazon S3
3) Importing and exporting Amazon DynamoDB data
4) Copying CSV files between Amazon S3 buckets

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
73
Q

If you need to create a new streaming application requiring Apache Kafka as the primary component, which AWS service would be the best fit for this requirement?

1) Amazon MQ
2) Amazon Managed Streaming for Apache Kafka (MSK)
3) Amazon OpenStreaming Service
4) Amazon Kinesis

A

2) Amazon Managed Streaming for Apache Kafka (MSK)

Amazon MSK is a fully managed service for running data-streaming applications that leverage Apache Kafka.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
74
Q

What type of database is Redshift?

1) Non-relational
2) Relational
3) NoSQL
4) Unrelational

A

2) Relational

Redshift is a relational database

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
75
Q

What AWS service allows you to run SQL queries against exabytes of unstructured data in Amazon S3 without needing to load or transform the data?

1) Amazon X-Ray
2) Amazon Redshift Serverless
3) Amazon OpenSearch Service
4) Amazon Redshift Spectrum

A

4) Amazon Redshift Spectrum

Redshift Spectrum allows you to run SQL queries directly against exabytes of unstructured data in Amazon S3. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Parquet, and others. Redshift Spectrum automatically scales query compute capacity based on the data being retrieved, so queries against Amazon S3 run fast, regardless of dataset size.
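
A sketch of the SQL you would run in Redshift to query S3 data in place with Spectrum, held here as Python strings. Schema, table, and bucket names are placeholders, and the external schema must point at a catalog such as the Glue Data Catalog:

```python
# Placeholder DDL: expose Parquet files in S3 as an external table.
create_external_table = """
CREATE EXTERNAL TABLE spectrum_schema.clickstream (
    user_id   VARCHAR(64),
    url       VARCHAR(2048),
    ts        TIMESTAMP
)
STORED AS PARQUET
LOCATION 's3://my-data-lake/clickstream/';
"""

# The data is then queryable with ordinary SQL, no loading step required.
query = """
SELECT user_id, COUNT(*) AS hits
FROM spectrum_schema.clickstream
GROUP BY user_id
ORDER BY hits DESC
LIMIT 10;
"""

print(query.strip())
```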

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
76
Q

What service would you use to create a logging solution involving visualization of log file analytics or BI reports?

1) Amazon Athena
2) Amazon OpenSearch Service (successor to Elasticsearch)
3) Amazon S3
4) Amazon EMR

A

2) Amazon OpenSearch Service (successor to Elasticsearch)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
77
Q

Which AWS service would be best for analyzing large volumes of data, handling complex queries efficiently, delivering fast query performance, and having the ability to scale effectively to support future data growth?

1) Amazon Redshift
2) Amazon S3
3) DynamoDB
4) Amazon RDS

A

1) Amazon Redshift

Redshift is the best solution for analyzing large volumes of data with complex queries, fast query performance, and scalability. Amazon Redshift is specifically designed for data warehousing and analytics workloads: it provides columnar storage, parallel query execution, and automatic scaling capabilities to handle large datasets and complex queries efficiently.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
78
Q

If you needed to implement a managed ETL service for automating your movement of data between AWS services, which service would best fit your needs?

1) Amazon S3 Event Notifications
2) Amazon ETL
3) AWS Data Pipeline
4) Amazon EventBridge

A

3) AWS Data Pipeline

AWS Data Pipeline is a managed extract, transform, load (ETL) service for automating the movement and transformation of your data.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
79
Q

Which of the following statements is true about AWS Glue?

1) In AWS Glue, you can specify the number of DPUs (data processing units) you want to allocate to an ETL job.
2) Auto Scaling based on a workload is NOT a serverless feature in AWS Glue.
3) On the Free tier, AWS Glue will store 1,000 objects for free.
4) AWS Glue lets you discover and connect up to 10 different data sources.

A

1) In AWS Glue, you can specify the number of DPUs (data processing units) you want to allocate to an ETL job.

You can specify the number of DPUs for an ETL job. A Glue ETL job must have a minimum of 2 DPUs. AWS Glue allocates 10 DPUs to each ETL job by default.
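
A sketch of the parameters a boto3 `create_job` call would take for a Glue ETL job, with the DPU count set via `MaxCapacity`. Job name, role ARN, and script location are placeholders:

```python
# Glue ETL job parameters, shaped like the create_job request.
# MaxCapacity is the number of DPUs; Glue defaults to 10 and requires
# at least 2 for a glueetl job. Names and ARNs are placeholders.
job_params = {
    "Name": "json-to-parquet",
    "Role": "arn:aws:iam::123456789012:role/GlueServiceRole",
    "Command": {
        "Name": "glueetl",
        "ScriptLocation": "s3://my-scripts/json_to_parquet.py",
    },
    "MaxCapacity": 10.0,  # DPUs allocated to this job
}

# With boto3 (not executed here):
# boto3.client("glue").create_job(**job_params)

print(job_params["MaxCapacity"])
```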

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
80
Q

You can use _____ to build a schema for your data, and _____ to query the data that’s stored in S3

1) EC2, SQS
2) EC2, Glue
3) Athena, Lambda
4) Glue, Athena

A

4) Glue, Athena

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
81
Q

Which service provides the easiest way to run ad hoc queries across multiple objects in S3 without the need to set up or manage any servers?

1) EMR
2) Glue
3) S3
4) Athena

A

4) Athena

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
82
Q

Which AWS service offers a fully managed way of running search and analytics engines?

1) AWS Athena
2) Amazon Elastic Analytics Service
3) Amazon QuickSight
4) Amazon OpenSearch Service

A

4) Amazon OpenSearch Service

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
83
Q

_____ provides real-time streaming of data.

1) Kinesis Data Analytics
2) SQS
3) Kinesis Data Streams
4) Kinesis Data Firehose

A

3) Kinesis Data Streams

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
84
Q

You would like to have a database that is efficient at performing analytical queries on large sets of columnar data. You would like to connect to this Data Warehouse using a reporting and dashboard tool such as Amazon QuickSight. Which AWS technology do you recommend?

1) Amazon RDS
2) Amazon S3
3) Amazon Redshift
4) Amazon Neptune

A

3) Amazon Redshift

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
85
Q

You have a lot of log files stored in an S3 bucket, and you want to perform a quick analysis, ideally serverless, to filter the logs and find users who attempted an unauthorized action. Which AWS service allows you to do so?

1) Amazon DynamoDB
2) Amazon Redshift
3) S3 Glacier
4) Amazon Athena

A

4) Amazon Athena

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
86
Q

As a Solutions Architect, you have been instructed to prepare a disaster recovery plan for a Redshift cluster. What should you do?

1) Enable Multi-AZ
2) Enable Automated Snapshots, then configure your Redshift cluster to automatically copy snapshots to another AWS region
3) Take a snapshot, then restore to a Redshift Global cluster

A

2) Enable Automated Snapshots, then configure your Redshift cluster to automatically copy snapshots to another AWS region

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
87
Q

Which feature in Redshift forces all COPY and UNLOAD traffic moving between your cluster and data repositories through your VPCs?

1) Enhanced VPC Routing
2) Improved VPC Routing
3) Redshift Spectrum

A

1) Enhanced VPC Routing

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
88
Q

You are running a gaming website that is using DynamoDB as its data store. Users have been asking for a search feature to find other gamers by name, with partial matches if possible. Which AWS technology do you recommend to implement this feature?

1) Amazon DynamoDB
2) Amazon Redshift
3) Amazon OpenSearch Service
4) Amazon Neptune

A

3) Amazon OpenSearch Service

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
89
Q

An AWS service allows you to create, run, and monitor ETL (extract, transform, and load) jobs in a few clicks.

1) AWS Glue
2) Amazon Redshift
3) Amazon RDS
4) Amazon DynamoDB

A

1) AWS Glue

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
90
Q

A company is using AWS to host its public websites and internal applications. These different websites and applications generate a lot of logs and traces. There is a requirement to store those logs centrally and to search and analyze them efficiently in real time to detect errors and threats. Which AWS service can help them efficiently store and analyze logs?

1) Amazon S3
2) Amazon OpenSearch service
3) Amazon ElastiCache
4) Amazon OLDB

A

2) Amazon OpenSearch service

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
91
Q

……………………….. makes it easy and cost-effective for data engineers and analysts to run applications built using open source big data frameworks such as Apache Spark, Hive, or Presto without having to operate or manage clusters.

1) AWS Lambda
2) Amazon EMR
3) Amazon Athena
4) Amazon OpenSearch Service

A

2) Amazon EMR

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
92
Q

An e-commerce company has all its historical data such as orders, customers, revenues, and sales for the previous years hosted on a Redshift cluster. There is a requirement to generate some dashboards and reports indicating the revenues from the previous years and the total sales, so it will be easy to define the requirements for the next year. The DevOps team is assigned to find an AWS service that can help define those dashboards and have native integration with Redshift. Which AWS service is best suited?

1) Amazon OpenSearch Service
2) Amazon Athena
3) Amazon QuickSight
4) Amazon EMR

A

3) Amazon QuickSight

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
93
Q

Which AWS Glue feature allows you to save and track the data that has already been processed during a previous run of a Glue ETL job?

1) Glue Job Bookmarks
2) Glue Elastic Views
3) Glue Streaming ETL
4) Glue DataBrew

A

1) Glue Job Bookmarks

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
94
Q

You are a DevOps engineer at a machine learning company that has 3 TB of JSON files stored in an S3 bucket. There is a requirement to do some analytics on those files using Amazon Athena, and you have been tasked with finding a way to convert the files' format from JSON to Apache Parquet. Which AWS service is best suited?

1) S3 Object Versioning
2) Kinesis Data Streams
3) Amazon MSK
4) AWS Glue

A

4) AWS Glue

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
95
Q

You have an on-premises application that works together with an on-premises Apache Kafka cluster to receive a stream of clickstream events from multiple websites. You have been tasked with migrating this application as soon as possible without any code changes. You decided to host the application on an EC2 instance. What is the best option you recommend to migrate Apache Kafka?

1) Kinesis Data Streams
2) AWS Glue
3) Amazon MSK
4) Kinesis Data Analytics

A

3) Amazon MSK

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
96
Q

You have data stored in RDS and S3 buckets, and you are using AWS Lake Formation as a data lake to collect, move, and catalog data so you can do some analytics. There are a lot of big data and ML engineers in the company, and you want to control access to part of the data as it might contain sensitive information. What can you use?

1) Lake Formation Fine-grained Access Control
2) Amazon Cognito
3) AWS Shield
4) S3 Object Lock

A

1) Lake Formation Fine-grained Access Control

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
97
Q

Which AWS service is most appropriate when you want to perform real-time analytics on streams of data?

1) Amazon SQS
2) Amazon SNS
3) Amazon Kinesis Data Analytics
4) Amazon Kinesis Data Firehose

A

3) Amazon Kinesis Data Analytics

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
98
Q

You have multiple Docker-based applications hosted on-premises that you want to migrate to AWS. You don’t want to provision or manage any infrastructure; you just want to run your containers on AWS. Which AWS service should you choose?

1) ECS in EC2 Launch Mode
2) ECR
3) AWS Fargate on ECS

A

3) AWS Fargate on ECS

AWS Fargate allows you to run your containers on AWS without managing any servers.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
99
Q

Amazon Elastic Container Service (ECS) has two Launch Types: ……………… and ………………

1) Amazon EC2 Launch Type and Fargate Launch Type
2) Amazon EC2 Launch Type and EKS Launch Type
3) Fargate Launch Type and EKS Launch Type

A

1) Amazon EC2 Launch Type and Fargate Launch Type

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
100
Q

You have an application hosted on an ECS Cluster (EC2 Launch Type) where you want your ECS tasks to upload files to an S3 bucket. Which IAM Role for your ECS Tasks should you modify?

1) EC2 Instance Profile
2) ECS Task Role

A

2) ECS Task Role

The ECS Task Role is the IAM Role used by the ECS task itself. Use it when your containers need to call other AWS services such as S3, SQS, etc.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
101
Q

You’re planning to migrate a WordPress website running on Docker containers from on-premises to AWS. You have decided to run the application in an ECS Cluster, but you want your Docker containers to access the same WordPress website content, such as website files, images, and videos. What do you recommend to achieve this?

1) Mount an EFS volume
2) Mount an EBS volume
3) Use an EC2 Instance Store

A

1) Mount an EFS volume

An EFS volume can be shared between different EC2 instances and different ECS Tasks. It can be used as persistent multi-AZ shared storage for your containers.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
102
Q

You are deploying an application on an ECS Cluster made of EC2 instances. Currently, the cluster is hosting one application that is issuing API calls to DynamoDB successfully. Upon adding a second application, which issues API calls to S3, you are getting authorization issues. What should you do to resolve the problem and ensure proper security?

1) Edit the EC2 instance role to add permissions to S3
2) Create an IAM task role for the new application
3) Enable the Fargate mode
4) Edit the S3 bucket policy to allow the ECS task

A

2) Create an IAM task role for the new application

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
103
Q

You are migrating your on-premises Docker-based applications to Amazon ECS. You were using Docker Hub Container Image Library as your container image repository. Which is an alternative AWS service which is fully integrated with Amazon ECS?

1) AWS Fargate
2) ECR
3) EKS
4) EC2

A

2) ECR

Amazon ECR is a fully managed container registry that makes it easy to store, manage, share, and deploy your container images. ECR is fully integrated with Amazon ECS, allowing easy retrieval of container images from ECR while managing and running containers using ECS.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
104
Q

Amazon EKS supports the following node types, EXCEPT ………………..

1) Managed Node Groups
2) Self-Managed Nodes
3) AWS Fargate
4) AWS Lambda

A

4) AWS Lambda

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
105
Q

A developer has a website and APIs running on their local machine using containers, and wants to deploy both of them on AWS. The developer is new to AWS and doesn’t know much about the different AWS services. Which of the following AWS services allows the developer to build and deploy the website and the APIs in the easiest way, according to AWS best practices?

1) AWS App Runner
2) EC2 Instances & Application Load Balancer
3) Amazon ECS
4) AWS Fargate

A

1) AWS App Runner

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
106
Q

In Amazon ECS, what is the role of a task definition?

A. To define the EC2 instances that run the application containers.
B. To manage user access and permissions for containerized applications.
C. To provide a blueprint for running Docker containers, including the container image and resource allocation.
D. To balance the load across multiple containers and distribute incoming traffic.

A

C. To provide a blueprint for running Docker containers, including the container image and resource allocation.

In Amazon ECS, a task definition is a blueprint for your application that describes how a container should run, including details like the Docker image, CPU and memory allocations, environment variables, and more.
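
A minimal task definition sketch, shaped like the JSON you would pass to `register_task_definition`. The image, role ARN, and names are placeholders:

```python
import json

# Minimal ECS task definition: the blueprint naming the container image,
# CPU/memory allocation, environment variables, and the task role.
# Image and role ARN are placeholders.
task_definition = {
    "family": "web-app",
    "networkMode": "awsvpc",
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",     # 0.25 vCPU
    "memory": "512",  # MiB
    "taskRoleArn": "arn:aws:iam::123456789012:role/AppTaskRole",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
            "portMappings": [{"containerPort": 80}],
            "environment": [{"name": "STAGE", "value": "prod"}],
        }
    ],
}

# With boto3 (not executed here):
# boto3.client("ecs").register_task_definition(**task_definition)

print(json.dumps(task_definition, indent=2))
```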

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
107
Q

Which service would you use in AWS to orchestrate and manage a cluster of containers using Kubernetes?

A. Amazon ECS
B. Amazon EKS
C. AWS Fargate
D. AWS Lambda

A

B. Amazon EKS

Amazon EKS (Elastic Kubernetes Service) is a managed service that makes it easier to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
108
Q

For containerized applications requiring persistent storage, which AWS service can be integrated with Amazon EKS to provide dynamic volume provisioning?

A. Amazon EBS
B. Amazon S3
C. AWS CloudFormation
D. Amazon VPC

A

A. Amazon EBS

Amazon EBS (Elastic Block Store) can be used with Amazon EKS to provide persistent block storage for containerized applications. EBS volumes can be dynamically provisioned as part of the EKS deployment process.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
109
Q

A company wants to deploy a new web application in AWS using containers. They need to ensure high availability and load balancing across multiple Availability Zones. Which combination of services would be most appropriate for this requirement?

A. Amazon ECS with AWS Fargate and Amazon Route 53
B. Amazon EKS with EC2 Auto Scaling Groups and AWS Lambda
C. Amazon EC2 with Elastic Load Balancing and Amazon S3
D. AWS Lambda with Amazon API Gateway and Amazon DynamoDB

A

A. Amazon ECS with AWS Fargate and Amazon Route 53

Amazon ECS with AWS Fargate allows for serverless container deployments, and when combined with Elastic Load Balancing and Route 53, it offers high availability across multiple Availability Zones and efficient traffic distribution.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
110
Q

A media company is processing large video files using a containerized batch processing application. They need to process jobs as they arrive without over-provisioning resources. What is the most cost-effective AWS solution for this scenario?

A. Deploy the application on Amazon EC2 instances managed by EC2 Auto Scaling.
B. Utilize AWS Batch with Spot Instances for processing jobs.
C. Use Amazon EKS with On-Demand EC2 Instances.
D. Implement the application as AWS Lambda functions triggered by Amazon S3 events.

A

B. Utilize AWS Batch with Spot Instances for processing jobs.

AWS Batch efficiently runs batch jobs and, when combined with Spot Instances, can provide a cost-effective solution for processing jobs as they arrive without the need for over-provisioning.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
111
Q

An enterprise is running a microservices architecture on AWS using Amazon EKS. They need to ensure that each microservice can scale independently based on demand. Which feature should they implement?

A. EC2 Auto Scaling Groups with custom scaling policies for each microservice.
B. Horizontal Pod Autoscaler in EKS for each microservice deployment.
C. AWS Fargate with scheduled scaling actions.
D. Amazon ECS service autoscaling for each microservice.

A

B. Horizontal Pod Autoscaler in EKS for each microservice deployment.

The Horizontal Pod Autoscaler in Amazon EKS automatically scales the number of pods in a deployment based on observed CPU utilization or other selected metrics, ideal for independently scaling microservices.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
112
Q

A financial services company needs to run a mission-critical application with strict compliance and security requirements. The application must be hosted in a containerized environment. Which setup should they use?

A. Amazon ECS with AWS Fargate running in a private subnet and integration with AWS Key Management Service for encryption.
B. Amazon EC2 instances with Docker, running in public subnets with security groups and NACLs configured for security.
C. AWS Lambda functions for each component of the application, with VPC peering for connectivity to on-premises systems.
D. Amazon EKS with dedicated EC2 instances, running within a private subnet and using IAM roles for secure access to AWS services.

A

D. Amazon EKS with dedicated EC2 instances, running within a private subnet and using IAM roles for secure access to AWS services.

Amazon EKS provides a secure and scalable environment for containerized applications. Using dedicated EC2 instances in a private subnet enhances security, and IAM roles ensure secure access to other AWS services, meeting compliance and security needs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
113
Q

What AWS service can create EC2 instances and place containers in them based on your task definitions?

1) ELB
2) Lambda
3) Docker
4) ECS

A

4) ECS

ECS manages this process for you

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
114
Q

Which of the following are features of Amazon Elastic Container Registry (Amazon ECR)?

Choose 3:
1) Scan on Push
2) Report Personal Identifiable Information (PII) on Push
3) Lifecycle policies
4) Duplicate images
5) Image tag immutability

A

1) Scan on Push
3) Lifecycle policies
5) Image tag immutability

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
115
Q

You expect your new application to have variable reads and writes to the relational database. Which service allows you to test the optimal sizing of your instances while also keeping your budget in mind?

1) Amazon RDS
2) Amazon Aurora Serverless
3) MySQL on EC2
4) DynamoDB

A

2) Amazon Aurora Serverless

Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora. It is a good fit for workloads with sudden and unpredictable spikes in activity, because you don’t need to plan capacity in advance. Since it is serverless, you also only pay for what you consume.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
116
Q

How can you easily collect insights regarding requests and responses for your AWS Lambda application?

1) Amazon CloudWatch
2) AWS CloudTrail
3) AWS X-Ray
4) Amazon OpenSearch

A

3) AWS X-Ray

When you see requests and responses, think AWS X-Ray. AWS X-Ray is a service that collects data about requests that your application serves. It provides tools that you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
117
Q

How can you run your Kubernetes clusters on-premises while easily maintaining AWS best practices?

1) Amazon ECS Anywhere
2) Amazon EKS Anywhere
3) Reference the AWS Well-Architected Framework
4) VMware on AWS

A

2) Amazon EKS Anywhere

Amazon EKS Anywhere provides a means of managing Kubernetes clusters using the same operational excellence and best practices that AWS uses for its Amazon Elastic Kubernetes Service (Amazon EKS). It leverages the EKS Distro (EKS-D) for deploying, using, and managing Kubernetes clusters that run in your data centers.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
118
Q

You have decided to deploy an Amazon Aurora Serverless database. What do you specify to set the scaling limits?

1) Aurora capacity units
2) Aurora scaling units
3) Amazon Aurora Reserved Instances
4) DAX

A

1) Aurora capacity units

Aurora capacity units (ACUs) are how the cluster scales; each unit corresponds to a certain amount of compute and memory. You set a minimum and a maximum, and the cluster automatically scales between them.
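
A sketch of the scaling bounds for an Aurora Serverless v2 cluster, shaped like the fields a boto3 `create_db_cluster` call would take. The cluster identifier and capacity values are placeholders:

```python
# Scaling bounds for an Aurora Serverless v2 cluster, expressed in
# Aurora capacity units (ACUs). Identifier and values are placeholders.
cluster_params = {
    "DBClusterIdentifier": "my-serverless-cluster",
    "Engine": "aurora-mysql",
    "ServerlessV2ScalingConfiguration": {
        "MinCapacity": 0.5,   # ACUs: each unit is a slice of compute + memory
        "MaxCapacity": 16.0,
    },
}

# With boto3 (not executed here):
# boto3.client("rds").create_db_cluster(**cluster_params)

scaling = cluster_params["ServerlessV2ScalingConfiguration"]
print(scaling)
```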

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
119
Q

You need an AWS-managed GraphQL interface for development. Which AWS service would meet this requirement?

1) AWS AppSync
2) Amazon Managed Grafana
3) Amazon Amplify
4) AWS Lambda

A

1) AWS AppSync

AWS AppSync provides a robust, scalable GraphQL interface for application developers to combine data from multiple sources, including Amazon DynamoDB, AWS Lambda, and HTTP APIs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
120
Q

What is one thing EC2 instances allow you to configure but a serverless application doesn’t?

1) The ability to pay for the service.
2) VPC placement
3) The ability to configure the service.
4) Operating System

A

4) Operating System

In a serverless application, you don’t have access to the OS

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
121
Q

What is the maximum amount of RAM you can allocate to a single Lambda function?

1) 512MB
2) 10GB
3) 1GB
4) 5GB

A

2) 10GB

Lambda supports up to 10GB of RAM

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
122
Q

What feature of ECS and EKS allows you to run containers without having to manage the underlying hosts?

1) Fargate
2) S3
3) EC2
4) Lambda

A

1) Fargate

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
123
Q

Which IAM entity is assigned to a Lambda function to provide it with permissions to access other AWS APIs?

1) Group
2) Role
3) Username and password
4) Secret Key and Access Key

A

2) Role

Roles should be used for Lambda to talk to other AWS APIs. Reference Documentation: AWS Lambda execution role

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
124
Q

Which distribution allows you to leverage Amazon EKS Anywhere?

1) Amazon EKS Library
2) Amazon EKS Distro (EKS-D)
3) Amazon EKS Anywhere is not a real option
4) Amazon EKS Open-Source

A

2) Amazon EKS Distro (EKS-D)

Amazon EKS Distro (EKS-D) is a Kubernetes distribution based on and used by Amazon Elastic Kubernetes Service (EKS) to create reliable and secure Kubernetes clusters.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
125
Q

You have created a Lambda function that typically will take around 1 hour to process some data. The code works fine when you run it locally on your machine, but when you invoke the Lambda function it fails with a “timeout” error after 3 seconds. What should you do?

1) Configure your Lambda’s timeout to 25 minutes
2) Configure your Lambda’s memory to 10 GB
3) Run your code somewhere else (e.g. EC2 instance)

A

3) Run your code somewhere else (e.g. EC2 instance)

Lambda’s maximum execution time is 15 minutes. You can run your code somewhere else such as an EC2 instance or use Amazon ECS.
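As an illustration, a hypothetical pre-deployment check against Lambda’s documented limits (15-minute timeout, 10 GB memory); the helper and its names are made up:

```python
# Hypothetical helper enforcing AWS Lambda's documented limits.
LAMBDA_MAX_TIMEOUT_SECONDS = 900   # 15 minutes
LAMBDA_MAX_MEMORY_MB = 10240       # 10 GB

def validate_lambda_config(timeout_seconds, memory_mb):
    """Return a list of configuration problems (empty means OK)."""
    problems = []
    if timeout_seconds > LAMBDA_MAX_TIMEOUT_SECONDS:
        problems.append("timeout exceeds the 900s (15 min) limit")
    if memory_mb > LAMBDA_MAX_MEMORY_MB:
        problems.append("memory exceeds the 10,240 MB (10 GB) limit")
    return problems

# A 1-hour job cannot run on Lambda, so this reports a problem:
print(validate_lambda_config(timeout_seconds=3600, memory_mb=1024))
```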

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
126
Q

Before you create a DynamoDB table, you need to provision the EC2 instance the DynamoDB table will be running on.

True
False

A

False

DynamoDB is serverless with no servers to provision, patch, or manage and no software to install, maintain or operate. It automatically scales tables up and down to adjust for capacity and maintain performance. It provides both provisioned (specify RCU & WCU) and on-demand (pay for what you use) capacity modes.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
127
Q

You have provisioned a DynamoDB table with 10 RCUs and 10 WCUs. A month later you want to increase the RCU to handle more read traffic. What should you do?

1) Increase RCU and keep WCU the same
2) You need to increase both RCU and WCU
3) Increase RCU and decrease WCU

A

1) Increase RCU and keep WCU the same

RCU and WCU are decoupled, so you can increase/decrease each value separately.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
128
Q

You have an e-commerce website where you are using DynamoDB as your database. You are about to enter the Christmas sale and you have a few items which are very popular and you expect that they will be read often. Unfortunately, last year due to the huge traffic you had the ProvisionedThroughputExceededException exception. What would you do to prevent this error from happening again?

1) Increase the RCU to a very high value
2) Create a DAX Cluster
3) Migrate the database away from DynamoDB for the time of the sale

A

2) Create a DAX Cluster

DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to 10x performance improvement. It caches the most frequently used data, thus offloading the heavy reads on hot keys off your DynamoDB table, hence preventing the “ProvisionedThroughputExceededException” exception.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
129
Q

You have developed a mobile application that uses DynamoDB as its datastore. You want to automate sending welcome emails to new users after they sign up. What is the most efficient way to achieve this?

1) Schedule a Lambda function to run every minute using CloudWatch Events, scan the entire table looking for new users
2) Enable SNS and DynamoDB integration
3) Enable DynamoDB Streams and configure it to invoke a Lambda function to send emails

A

3) Enable DynamoDB Streams and configure it to invoke a Lambda function to send emails

DynamoDB Streams allows you to capture a time-ordered sequence of item-level modifications in a DynamoDB table. It’s integrated with AWS Lambda so you can create triggers that automatically respond to events in real time. There is no such SNS–DynamoDB integration.
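A minimal sketch of such a trigger handler; the "email" attribute and the event payload values are made up, and a real handler would call SES or SNS instead of returning the list:

```python
# Hedged sketch of a Lambda handler for DynamoDB Streams records.
def handler(event, context=None):
    """Collect the email address of every newly inserted user."""
    emails = []
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":
            continue  # ignore MODIFY / REMOVE stream records
        new_image = record["dynamodb"]["NewImage"]
        emails.append(new_image["email"]["S"])
    return emails

sample_event = {"Records": [
    {"eventName": "INSERT",
     "dynamodb": {"NewImage": {"email": {"S": "new.user@example.com"}}}},
    {"eventName": "MODIFY",
     "dynamodb": {"NewImage": {"email": {"S": "old.user@example.com"}}}},
]}
print(handler(sample_event))
```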

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
130
Q

To create a serverless API, you should integrate Amazon API Gateway with ………………….

1) EC2 Instance
2) Elastic Load Balancing
3) AWS Lambda

A

3) AWS Lambda

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
131
Q

When you are using an Edge-Optimized API Gateway, your API Gateway lives in CloudFront Edge Locations across all AWS Regions.

True
False

A

False

An Edge-Optimized API Gateway is best for geographically distributed clients. API requests are routed to the nearest CloudFront Edge Location which improves latency. The API Gateway still lives in one AWS Region.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
132
Q

You are running an application in production that leverages DynamoDB as its datastore and experiences smooth, sustained usage. You now need to run the application in a development environment as well, where it will experience an unpredictable volume of requests. What is the most cost-effective solution that you recommend?

1) Use Provisioned Capacity Mode with AutoScaling enabled for both development and production
2) Use Provisioned Capacity Mode with AutoScaling enabled for production and On-Demand Capacity Mode for development
3) Use Provisioned Capacity Mode with AutoScaling enabled for development and On-Demand Capacity Mode for production
4) Use On-Demand Capacity Mode for both development and production

A

2) Use Provisioned Capacity Mode with AutoScaling enabled for production and On-Demand Capacity Mode for development

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
133
Q

You have an application that is served globally using a CloudFront Distribution. You want to authenticate users at the CloudFront Edge Locations instead of having authentication requests go all the way to your origin. What should you use to satisfy this requirement?

1) Lambda@Edge
2) API Gateway
3) DynamoDB
4) AWS Global Accelerator

A

1) Lambda@Edge

Lambda@Edge is a feature of CloudFront that lets you run code closer to your users, which improves performance and reduces latency.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
134
Q

The maximum size of an item in a DynamoDB table is ……………….

1) 1 MB
2) 500 KB
3) 400 KB
4) 400 MB

A

3) 400 KB
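For a rough sense of the limit, here is a hypothetical size estimator (the real DynamoDB size accounting has per-type rules; this only approximates string attributes):

```python
# Rough estimator only: real DynamoDB size accounting has per-type rules.
DYNAMODB_MAX_ITEM_BYTES = 400 * 1024  # 400 KB

def approximate_item_size(item):
    """Sum of UTF-8 attribute-name bytes plus string-value bytes."""
    return sum(len(k.encode()) + len(v.encode()) for k, v in item.items())

def fits_in_dynamodb(item):
    return approximate_item_size(item) <= DYNAMODB_MAX_ITEM_BYTES

small = {"pk": "user#1", "bio": "hello"}
huge = {"pk": "user#2", "blob": "x" * 500_000}  # ~500 KB of data
print(fits_in_dynamodb(small), fits_in_dynamodb(huge))
```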

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
135
Q

Which AWS service allows you to build Serverless workflows using AWS services (e.g., Lambda) and supports human approval?

1) AWS Lambda
2) Amazon EC2
3) AWS Step Functions
4) AWS Storage Gateway

A

3) AWS Step Functions

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
136
Q

A company has a serverless application on AWS consisting of Lambda, DynamoDB, and Step Functions. In the last month, there has been an increase in the number of requests against the application, which has increased DynamoDB costs, and requests have started to be throttled. Further investigation shows that the majority of requests are read requests against certain queries on the DynamoDB table. What do you recommend to prevent throttling and reduce costs efficiently?

1) Use an EC2 instance with Redis installed and place it between the Lambda function and DynamoDB table
2) Migrate from DynamoDB to Aurora and use ElastiCache to cache the most requested data
3) Migrate from DynamoDB to S3 and use CloudFront to cache the most requested data
4) Use DynamoDB Accelerator (DAX) to cache the most requested data

A

4) Use DynamoDB Accelerator (DAX) to cache the most requested data

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
137
Q

You are a DevOps engineer in a football company that has a website backed by a DynamoDB table. The table stores viewers’ feedback for football matches. You have been tasked to work with the analytics team to generate reports on the viewers’ feedback. The analytics team wants the DynamoDB data in JSON format, hosted in an S3 bucket, so they can start working on it and create the reports. What is the best and most cost-effective way to convert DynamoDB data to JSON files?

1) Select DynamoDB table then select Export to S3
2) Create a Lambda function to read DynamoDB data, convert them to JSON files, then store files in S3 bucket
3) Use AWS Transfer Family
4) Use AWS DataSync

A

1) Select DynamoDB table then select Export to S3

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
138
Q

A website is currently in the development process and it is going to be hosted on AWS. There is a requirement to store user sessions for users logged in to the website, with automatic expiry and deletion of expired user sessions. Which of the following approaches is best suited for this use case?

1) Store users’ sessions in an S3 bucket and enable S3 Lifecycle Policy
2) Store users’ sessions locally in an EC2 instance
3) Store users’ sessions in a DynamoDB table and enable TTL
4) Store users’ sessions in an EFS file system

A

3) Store users’ sessions in a DynamoDB table and enable TTL
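A sketch of building such a session item; the TTL attribute name "expireAt" is an assumption (you pick the name when enabling TTL on the table), and DynamoDB TTL expects an epoch timestamp in seconds:

```python
import time

# "expireAt" is an assumed TTL attribute name; DynamoDB TTL expects
# the value to be an epoch timestamp in seconds.
SESSION_LIFETIME_SECONDS = 30 * 60  # 30-minute sessions

def make_session_item(session_id, user_id, now=None):
    now = int(time.time()) if now is None else now
    return {
        "session_id": {"S": session_id},
        "user_id": {"S": user_id},
        "expireAt": {"N": str(now + SESSION_LIFETIME_SECONDS)},
    }

item = make_session_item("sess-123", "user-42", now=1_700_000_000)
print(item["expireAt"])
```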

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
139
Q

You have a mobile application and would like to give your users access to their own personal space in the S3 bucket. How do you achieve that?

1) Generate IAM user credentials for each of your application’s users
2) Use Amazon Cognito Identity Federation
3) Use SAML Identity Federation
4) Use a Bucket Policy to make your bucket public

A

2) Use Amazon Cognito Identity Federation

Amazon Cognito can be used to federate mobile user accounts and provide each user with their own IAM permissions, so they can access their own personal space in the S3 bucket.
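A sketch of the kind of IAM policy such a federated identity might assume; the bucket name and prefix layout are assumptions, and the policy variable resolves per user at request time:

```python
import json

# "my-app-bucket" and the "private/" prefix are assumptions. The policy
# variable ${cognito-identity.amazonaws.com:sub} resolves to each
# federated user's Cognito identity ID.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::my-app-bucket/private/"
                    "${cognito-identity.amazonaws.com:sub}/*",
    }],
}
print(json.dumps(policy, indent=2))
```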

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
140
Q

You are developing a new web and mobile application that will be hosted on AWS and currently, you are working on developing the login and signup page. The application backend is serverless and you are using Lambda, DynamoDB, and API Gateway. Which of the following is the best and easiest approach to configure the authentication for your backend?

1) Store users’ credentials in a DynamoDB table encrypted using KMS
2) Store users’ credentials in an S3 bucket encrypted using KMS
3) Use Cognito User Pools
4) Store users’ credentials in AWS Secrets Manager

A

3) Use Cognito User Pools

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
141
Q

You are running a mobile application where you want each registered user to upload/download images to/from their own folder in the S3 bucket. Also, you want to allow your users to sign up and sign in using their social media accounts (e.g., Facebook). Which AWS service should you choose?

1) AWS IAM
2) AWS IAM Identity Center
3) Amazon Cognito
4) Amazon CloudFront

A

3) Amazon Cognito

Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0 and OpenID Connect.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
142
Q

A startup company plans to run its application on AWS. As a solutions architect, the company hired you to design and implement a fully Serverless REST API. Which technology stack do you recommend?

1) API Gateway + AWS Lambda
2) Application Load Balancer + EC2
3) ECS + EBS
4) Amazon CloudFront + S3

A

1) API Gateway + AWS Lambda

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
143
Q

The following AWS services have an out-of-the-box caching feature, EXCEPT ……………..

1) API Gateway
2) Lambda
3) DynamoDB

A

2) Lambda

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
144
Q

You have a lot of static files stored in an S3 bucket that you want to distribute globally to your users. Which AWS service should you use?

1) S3 Cross-Region Replication
2) Amazon CloudFront
3) Amazon Route 53
4) API Gateway

A

2) Amazon CloudFront

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. This is a perfect use case for Amazon CloudFront.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
145
Q

You have created a DynamoDB table in ap-northeast-1 and would like to make it available in eu-west-1, so you decided to create a DynamoDB Global Table. What needs to be enabled first before you create a DynamoDB Global Table?

1) DynamoDB Streams
2) DynamoDB DAX
3) DynamoDB Versioning
4) DynamoDB Backups

A

1) DynamoDB Streams

DynamoDB Streams enable DynamoDB to get a changelog and use that changelog to replicate data across replica tables in other AWS Regions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
146
Q

You have configured a Lambda function to run each time an item is added to a DynamoDB table using DynamoDB Streams. The function is meant to insert messages into the SQS queue for further long processing jobs. Each time the Lambda function is invoked, it seems able to read from the DynamoDB Stream but it isn’t able to insert the messages into the SQS queue. What do you think the problem is?

1) Lambda can’t be used to insert messages into the SQS queue, use an EC2 instance instead
2) The Lambda Execution IAM Role is missing permissions
3) The Lambda security group must allow outbound access to SQS
4) The SQS security group must be edited to allow AWS Lambda

A

2) The Lambda Execution IAM Role is missing permissions

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
147
Q

You would like to create an architecture for a microservices application whose sole purpose is to encode videos stored in an S3 bucket and store the encoded videos back into an S3 bucket. You would like to make this microservices application reliable, with the ability to retry upon failure. Each video may take over 25 minutes to be processed. The services used in the architecture should be asynchronous and should be able to be stopped for a day and resume the next day from the videos that haven’t been encoded yet. Which of the following AWS services would you recommend in this scenario?

1) Amazon S3 + AWS Lambda
2) Amazon SNS + Amazon EC2
3) Amazon SQS + Amazon EC2
4) Amazon SQS + AWS Lambda

A

3) Amazon SQS + Amazon EC2

Amazon SQS allows you to retain messages for days and process them later, even while your EC2 instances are stopped.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
148
Q

You are running a photo-sharing website where your images are downloaded from all over the world. Every month you publish a master pack of beautiful mountain images that are over 15 GB in size. The content is currently hosted on an Elastic File System (EFS) file system and distributed by an Application Load Balancer and a set of EC2 instances. Each month, you are experiencing very high traffic which increases the load on your EC2 instances and increases network costs. What do you recommend to reduce EC2 load and network costs without refactoring your website?

1) Host the master pack in S3
2) Enable Application Load Balancer Caching
3) Scale up the EC2 instances
4) Create a CloudFront Distribution

A

4) Create a CloudFront Distribution

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. Amazon CloudFront can be used in front of an Application Load Balancer.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
149
Q

An AWS service allows you to capture gigabytes of data per second in real time and deliver this data to multiple consuming applications, with a replay feature.

1) Kinesis Data Streams
2) Amazon S3
3) Amazon MQ

A

1) Kinesis Data Streams

Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. It can continuously capture gigabytes of data per second from hundreds of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
150
Q

You are an AWS Solutions Architect. Your organization runs a well-functioning online application using AWS Auto Scaling. Customers from all around the world are becoming interested in using the app; however, this is having a negative impact on the application’s performance. Your boss wants to know how you can improve the application’s performance and availability. Which of the following AWS offerings would you suggest?

1) AWS DataSync
2) Amazon DynamoDB Accelerator
3) AWS Lake Formation
4) AWS Global Accelerator

A

4) AWS Global Accelerator

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
151
Q

You’re working on an HPC application with your team. A high-performance, low-latency Lustre file system is required to address complex, computationally intensive problems. You must set up this file system on AWS at low cost. What’s the best way to do this?
1) Use a Lustre file system created with Amazon FSx.
2) Use Amazon EBS to set up a high-performance Lustre file system.
3) Use an EC2 placement group to create a high-speed volume cluster.
4) Launch Lustre from the AWS Marketplace.

A

1) Use a Lustre file system created with Amazon FSx.

Customers using Amazon FSx for Lustre are only charged for the resources they actually use.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
152
Q

Your website is hosted in an S3 bucket and you have customers from across the world. You want to cache frequently accessed content in an AWS service to minimize latency and boost data transfer speeds. Which of the following should you choose?

1) Use AWS SDKs to scale horizontally by making concurrent requests to Amazon S3 service endpoints.
2) Create multiple Amazon S3 buckets in the same AWS Region.
3) Enable Cross-Region Replication to several AWS Regions to better serve customers around the globe.
4) Set up CloudFront to distribute the S3 bucket’s content.

A

4) Set up CloudFront to distribute the S3 bucket’s content.

CloudFront caches frequently requested content, improving performance. The other options may improve speed, but they do not cache S3 objects.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
153
Q

Your company’s online game uses an Auto Scaling group. The app’s traffic is well known in advance: there is a noticeable rise in traffic on Fridays which lasts over the weekend, and traffic begins to decrease on Mondays. The Auto Scaling group’s scaling needs to be planned. Which approach is best for implementing a scaling policy?

1) Create a scheduled CloudWatch event rule that launches and terminates instances every week.
2) Set a target tracking scaling policy based on the average CPU metric so the ASG scales automatically.
3) Using the ASG’s Automatic Scaling tab, implement a step scaling policy to automatically scale out/in at a defined time every week.
4) Create a scheduled action in the Auto Scaling group and define the frequency, start and end times, and the capacity of the action.

A

4) Create a scheduled action in the Auto Scaling group and define the frequency, start and end times, and the capacity of the action.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
154
Q

A machine learning application must be deployed on AWS EC2. The application relies heavily on the speed of inter-instance communication, so you’ve decided to attach a network device to the instances to boost that speed. What’s the best option for increasing throughput?

1) Enable enhanced networking on the EC2 instances.
2) Configure an Elastic Fabric Adapter (EFA) on the instances.
3) Attach a high-throughput ENI to the instances.
4) Create an Elastic File System (EFS) and mount it on the instances.

A

2) Configure an Elastic Fabric Adapter (EFA) on the instances.

EFA is the best-suited option for accelerating High-Performance Computing (HPC) and machine learning applications.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
155
Q

You’re building many EC2 instances for a new application. The EC2 instances must have both low network latency and high network throughput for the application to perform well, and all instances should be deployed in a single Availability Zone. How would you set this up?

1) Launch all the EC2 instances in a placement group using the Cluster placement strategy.
2) Automatically assign a public IP address to each instance when it is launched.
3) Launch the EC2 instances in a placement group using the Spread placement strategy.
4) Launch the EC2 instances using an instance type that provides enhanced networking wherever possible.

A

1) Launch all the EC2 instances in a placement group using the Cluster placement strategy.

The Cluster placement strategy improves network performance between EC2 instances. You choose the strategy when creating the placement group.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
156
Q

You have an S3 bucket where clients may upload images. When an object is uploaded, an event notification containing the object information is delivered to an SQS queue. You also have an ECS cluster that receives messages from the queue and processes them in batches. Depending on the volume of incoming messages and the pace at which the backend processes them, the queue size may fluctuate dramatically. Which metric would you use to increase or decrease the capacity of the ECS cluster?
1) The number of messages in the SQS queue.
2) The ECS cluster’s memory utilization.
3) The total number of objects in the S3 bucket.
4) The ECS cluster’s container count.

A

1) The number of messages in the SQS queue.

You can set up a CloudWatch alarm based on the number of messages in the SQS queue and use it to scale the ECS cluster out or in.
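As a sketch, the parameters you might pass to CloudWatch’s put_metric_alarm for the scale-out side; the queue name and thresholds are made up:

```python
# Queue name and thresholds are assumptions; these are the kwargs you
# would pass to CloudWatch put_metric_alarm for the scale-out alarm.
scale_out_alarm = {
    "AlarmName": "ecs-scale-out-on-queue-depth",
    "Namespace": "AWS/SQS",
    "MetricName": "ApproximateNumberOfMessagesVisible",
    "Dimensions": [{"Name": "QueueName", "Value": "image-jobs"}],
    "Statistic": "Average",
    "Period": 60,                  # evaluate the metric every minute
    "EvaluationPeriods": 2,        # two breaching periods before alarming
    "Threshold": 100,              # backlog of more than 100 messages
    "ComparisonOperator": "GreaterThanThreshold",
}
print(scale_out_alarm["MetricName"])
```

A mirror-image alarm with a low threshold would drive the scale-in side.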

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
157
Q

You have an existing VPC and need to route all traffic from the VPC to S3 buckets over the AWS internal network, so a VPC endpoint for S3 has been set up and S3 traffic is allowed on it. As part of the application you’re building, you’ll use the VPC to send traffic to an S3 bucket. You created a route table, added a route to the VPC endpoint, and associated the route table with your new subnet. However, when you submit an S3 bucket request from EC2 using the AWS CLI, you receive a 403 Access Forbidden error. What could the problem be?

1) Your VPC is located in a different Region from the S3 bucket.
2) The EC2 security group’s outbound rules block traffic to the S3 prefix list.
3) The VPC endpoint may have a restrictive policy that does not allow access to the S3 bucket.
4) The EC2 instances are not listed as an origin in the S3 bucket’s CORS configuration.

A

3) The VPC endpoint may have a restrictive policy that does not allow access to the S3 bucket.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
158
Q

You have a CloudFront Distribution that serves your website hosted on a fleet of EC2 instances behind an Application Load Balancer. All your clients are from the United States, but you found that some malicious requests are coming from other countries. What should you do to only allow users from the US and block other countries?

1) Use CloudFront Geo Restriction
2) Use Origin Access Control
3) Set up a security group and attach it to your CloudFront Distribution
4) Use a Route 53 Latency record and attach it to CloudFront

A

1) Use CloudFront Geo Restriction

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
159
Q

You have a static website hosted on an S3 bucket. You have created a CloudFront Distribution that points to your S3 bucket to better serve your requests and improve performance. After a while, you noticed that users can still access your website directly from the S3 bucket. You want to enforce users to access the website only through CloudFront. How would you achieve that?

1) Send an email to your clients and tell them not to use the S3 endpoint
2) Configure your CloudFront Distribution and create an Origin Access Control (OAC), then update your S3 Bucket Policy to only accept requests from your CloudFront Distribution
3) Use S3 Access Points to redirect clients to CloudFront

A

2) Configure your CloudFront Distribution and create an Origin Access Control (OAC), then update your S3 Bucket Policy to only accept requests from your CloudFront Distribution

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
160
Q

What does this S3 bucket policy do?

{
    "Version": "2012-10-17",
    "Id": "Mystery policy",
    "Statement": [{
        "Sid": "What could it be?",
        "Effect": "Allow",
        "Principal": {
            "Service": "cloudfront.amazonaws.com"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::examplebucket/*",
        "Condition": {
            "StringEquals": {
                "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/EDFDVBD6EXAMPLE"
            }
        }
    }]
}

1) Forces GetObject request to be encrypted if coming from CloudFront
2) Only allows the S3 bucket content to be accessed from your CloudFront Distribution
3) Only allows GetObject type of request on the S3 bucket from anybody

A

2) Only allows the S3 bucket content to be accessed from your CloudFront Distribution

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
161
Q

A WordPress website is hosted in a set of EC2 instances in an EC2 Auto Scaling Group and fronted by a CloudFront Distribution which is configured to cache the content for 3 days. You have released a new version of the website and want to release it immediately to production without waiting for 3 days for the cached content to be expired. What is the easiest and most efficient way to solve this?

1) Open a support ticket with AWS Support to remove the CloudFront Cache
2) CloudFront Cache Invalidation
3) EC2 Cache Invalidation

A

2) CloudFront Cache Invalidation
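A sketch of the request body for CloudFront’s CreateInvalidation API; the distribution ID is a placeholder, and "/*" invalidates every cached object:

```python
import time

# "EDFDVBD6EXAMPLE" is a placeholder distribution ID; "/*" invalidates
# every cached object in the distribution.
invalidation_request = {
    "DistributionId": "EDFDVBD6EXAMPLE",
    "InvalidationBatch": {
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        # CallerReference must be unique per invalidation request.
        "CallerReference": "release-%d" % int(time.time()),
    },
}
print(invalidation_request["InvalidationBatch"]["Paths"]["Items"])
```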

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
162
Q

A company is deploying a media-sharing website to AWS. They are going to use CloudFront to deliver the content with low latency to their customers, who are located in the US and Europe only. After a while, CloudFront costs become very high. Which CloudFront feature allows you to decrease costs by targeting only the US and Europe?

1) CloudFront Cache Invalidation
2) CloudFront Price Classes
3) CloudFront Cache Behavior
4) Origin Access Control

A

2) CloudFront Price Classes

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
163
Q

A company is migrating a web application to AWS Cloud and they are going to use a set of EC2 instances in an EC2 Auto Scaling Group. The web application is made of multiple components so they will need a host-based routing feature to route to specific web application components. This web application is used by many customers and therefore the web application must have a static IP address so it can be whitelisted by the customers’ firewalls. As the customers are distributed around the world, the web application must also provide low latency to all customers. Which AWS service can help you to assign a static IP address and provide low latency across the globe?

1) AWS Global Accelerator + Application Load Balancer
2) Amazon CloudFront
3) Network Load Balancer
4) Application Load Balancer

A

1) AWS Global Accelerator + Application Load Balancer

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
164
Q

What is the minimum length of time before you can schedule a KMS key to be deleted?

1) 30 days
2) 7 days
3) 1 day
4) There is no waiting period

A

2) 7 days

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
165
Q

Which AWS service supports automatic rotation of RDS security credentials?

1) S3
2) DynamoDB
3) Parameter Store
4) Secrets Manager

A

4) Secrets Manager

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
166
Q

What would you use Amazon Cognito for?

1) To deploy physical firewall protection across your VPCs via its managed infrastructure (e.g., a physical firewall that is managed by AWS).
2) To provide authentication, authorization, and user management for your web and mobile apps without the need for custom code.
3) To view all your security alerts from services like Amazon GuardDuty, Amazon Inspector, Amazon Macie, and AWS Firewall Manager.
4) To get the compliance-related information that matters to you, such as AWS security and compliance reports or select online agreements.

A

2) To provide authentication, authorization, and user management for your web and mobile apps without the need for custom code.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
167
Q

Which of the following is NOT a data source for GuardDuty?

1) CloudTrail logs
2) DNS query logs
3) RDS event history
4) VPC Flow Logs

A

3) RDS event history

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
168
Q

Which Layers does WAF provide protection on?

1) All Layers
2) Layers 3 and 4
3) Layers 3, 4, and 7
4) Layer 7

A

4) Layer 7

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
169
Q

What is the best way to deliver content from an S3 bucket that only allows users to view content for a set period of time?

1) Set a bucket policy to open up the content you need to share.
2) Create a public copy of your data in another S3 bucket.
3) Replicate the S3 data to the requested user’s S3 bucket.
4) Create a presigned URL using S3.

A

4) Create a presigned URL using S3.

Presigned URLs allow you to restrict the length of time the content can be viewed.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
170
Q

You need a single source you can visit to get the compliance-related information that matters to you, such as AWS security and compliance reports or select online agreements. Which service should you use?

1) AWS Artifact
2) AWS Audit Manager
3) Amazon Cognito
4) Amazon Detective

A

1) AWS Artifact

Artifact is a single source you can visit to get the compliance-related information that matters to you, such as AWS security and compliance reports or select online agreements.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
171
Q

Your boss requires automatic key rotation for your encrypted data. Which AWS service supports this?

1) EBS
2) KMS
3) SQS
4) EC2

A

2) KMS

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
171
Q

To enable In-flight Encryption (In-Transit Encryption), we need to have ……………………

1) an HTTP endpoint with an SSL certificate
2) an HTTPS endpoint with an SSL certificate
3) a TCP endpoint

A

2) an HTTPS endpoint with an SSL certificate

In-flight Encryption = HTTPS, and HTTPS cannot be enabled without an SSL certificate.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
171
Q

Server-Side Encryption means that the data is sent encrypted to the server.

True
False

A

False

Server-Side Encryption means the server will encrypt the data for us. We don’t need to encrypt it beforehand.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
172
Q

In Server-Side Encryption, where do the encryption and decryption happen?

1) Both Encryption and Decryption happen on the server
2) Both Encryption and Decryption happen on the client
3) Encryption happens on the server and Decryption happens on the client
4) Encryption happens on the client and Decryption happens on the server

A

1) Both Encryption and Decryption happen on the server

In Server-Side Encryption, we can’t do encryption/decryption ourselves as we don’t have access to the corresponding encryption key.

173
Q

In Client-Side Encryption, the server must know our encryption scheme before we can upload the data.

True
False

A

False

With Client-Side Encryption, the server doesn’t need to know any information about the encryption scheme being used, as the server will not perform any encryption or decryption operations.

174
Q

You need to create KMS Keys in AWS KMS before you are able to use the encryption features for EBS, S3, RDS …

True
False

A

False

You can use the AWS managed keys in KMS, so you don't need to create your own KMS keys.

You could also create your own keys for AWS to use for encryption, but it's not mandatory.

175
Q

AWS KMS supports both symmetric and asymmetric KMS keys.

True
False

A

True

KMS keys can be symmetric or asymmetric. A symmetric KMS key represents a 256-bit key used for encryption and decryption. An asymmetric KMS key represents either an RSA key pair used for encryption and decryption or for signing and verification (but not both), or an elliptic curve (ECC) key pair used for signing and verification.

176
Q

When you enable Automatic Rotation on your KMS Key, the backing key is rotated every ……………..

1) 90 days
2) 1 year
3) 2 years
4) 3 years

A

2) 1 year

177
Q

You have an AMI that has an encrypted EBS snapshot using KMS CMK. You want to share this AMI with another AWS account. You have shared the AMI with the desired AWS account, but the other AWS account still can’t use it. How would you solve this problem?

1) The other AWS account needs to logout and login again to refresh its credentials
2) You need to share the KMS CMK used to encrypt the AMI with the other AWS account
3) You can’t share an AMI that has an encrypted EBS snapshot

A

2) You need to share the KMS CMK used to encrypt the AMI with the other AWS account

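Sharing the CMK means adding the other account to the key policy. A hedged sketch of the statement you would add (the account ID is made up, and the exact action list depends on how the target account uses the snapshot):

```python
# Key-policy statement granting another AWS account (hypothetical ID
# 444455556666) the permissions typically needed to launch instances
# from a shared AMI whose EBS snapshot is encrypted with this CMK.
cross_account_statement = {
    "Sid": "AllowUseOfKeyByOtherAccount",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:CreateGrant",  # EC2 needs a grant to attach the encrypted volume
    ],
    "Resource": "*",
}

print(cross_account_statement["Principal"]["AWS"])
```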
178
Q

You have created a Customer-managed CMK in KMS that you use to encrypt both S3 buckets and EBS snapshots. Your company policy mandates that your encryption keys be rotated every 3 months. What should you do?

1) Re-configure your KMS CMK and enable Automatic Rotation, in the “Period” select 3 months
2) Use AWS Managed Keys as they are automatically rotated by AWS every 3 months
3) Rotate the KMS CMK manually. Create a new KMS CMK and use Key Aliases to reference the new KMS CMK. Keep the old KMS CMK so you can decrypt the old data

A

3) Rotate the KMS CMK manually. Create a new KMS CMK and use Key Aliases to reference the new KMS CMK. Keep the old KMS CMK so you can decrypt the old data

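The manual-rotation pattern relies on an alias as a level of indirection: writers always encrypt via the alias, while old key IDs stay around for decryption. A minimal simulation of that bookkeeping (no AWS calls; the key IDs and alias name are made up):

```python
# Alias indirection for manual key rotation (pure bookkeeping, no AWS).
keys = {"key-2023": "old-material", "key-2024": "new-material"}
aliases = {"alias/app-data": "key-2023"}

def encrypt_key_id(alias: str) -> str:
    # Writers resolve the alias at encryption time.
    return aliases[alias]

old_id = encrypt_key_id("alias/app-data")   # data encrypted under key-2023

# Manual rotation every 3 months: create a new key, repoint the alias.
aliases["alias/app-data"] = "key-2024"
new_id = encrypt_key_id("alias/app-data")   # new data uses key-2024

# The old key is kept (not deleted) so ciphertexts created under
# key-2023 can still be decrypted.
assert old_id in keys and new_id in keys
print(old_id, "->", new_id)
```

In real KMS this repointing is what `kms:UpdateAlias` does; applications referencing the alias never need to change.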
179
Q

What should you use to control access to your KMS CMKs?

1) KMS Key Policies
2) KMS IAM Policy
3) AWS GuardDuty
4) KMS Access Control List (KMS ACL)

A

1) KMS Key Policies

180
Q

You have a Lambda function used to process some data in the database. You would like to give your Lambda function access to the database password. Which of the following options is the most secure?

1) Embed it in the code
2) Have it as a plaintext env variable
3) Have it as an encrypted env variable and decrypt it at runtime

A

3) Have it as an encrypted env variable and decrypt it at runtime

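Inside the handler, option 3 looks roughly like this: the environment variable holds base64-encoded ciphertext, and the function decrypts it at runtime. The `kms_decrypt` function below is a stand-in for `boto3`'s `kms.decrypt` call, stubbed (as a byte reversal) so the sketch runs without AWS credentials:

```python
import base64
import os

def kms_decrypt(ciphertext_blob: bytes) -> bytes:
    # Stand-in for boto3: kms.decrypt(CiphertextBlob=...)["Plaintext"].
    # Stubbed as a reversal so the example is self-contained.
    return ciphertext_blob[::-1]

# Deployment stores the password encrypted + base64-encoded in the env.
os.environ["DB_PASSWORD_ENC"] = base64.b64encode(b"terces").decode()

def handler(event, context):
    blob = base64.b64decode(os.environ["DB_PASSWORD_ENC"])
    password = kms_decrypt(blob).decode()  # decrypted only at runtime
    return password
```

The plaintext never appears in the function configuration, only in memory while the function runs.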
181
Q

You have a secret value that you use for encryption purposes, and you want to store and track the values of this secret over time. Which AWS service should you use?

1) AWS KMS Versioning feature
2) SSM Parameter Store
3) Amazon S3

A

2) SSM Parameter Store

SSM Parameter Store can be used to store secrets and has built-in version tracking. Each time you edit the value of a parameter, SSM Parameter Store creates a new version of the parameter and retains the previous versions. You can view the details, including the values, of all versions in a parameter's history.

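The version-tracking behavior described above can be sketched as a tiny in-memory store (illustrative only; the real service is accessed via `ssm.put_parameter` and `ssm.get_parameter_history`):

```python
class ParameterStore:
    """Toy model of SSM Parameter Store version tracking."""

    def __init__(self):
        self._history = {}          # name -> list of values (all versions)

    def put_parameter(self, name: str, value: str) -> int:
        versions = self._history.setdefault(name, [])
        versions.append(value)      # each edit creates a new version
        return len(versions)        # version numbers start at 1

    def get_parameter(self, name: str, version=None) -> str:
        versions = self._history[name]
        return versions[-1] if version is None else versions[version - 1]

store = ParameterStore()
store.put_parameter("/app/secret", "v1-value")
store.put_parameter("/app/secret", "v2-value")

print(store.get_parameter("/app/secret"))             # latest value
print(store.get_parameter("/app/secret", version=1))  # from history
```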
182
Q

Your user-facing website is a high-risk target for DDoS attacks and you would like 24/7 support in case they happen, as well as AWS bill reimbursement for the costs incurred during an attack. What AWS service should you use?

1) AWS WAF
2) AWS Shield Advanced
3) AWS Shield
4) AWS DDoS OpsTeam

A

2) AWS Shield Advanced

183
Q

You would like to externally maintain the configuration values of your main database, to be picked up at runtime by your application. What’s the best place to store them to maintain control and version history?

1) Amazon DynamoDB
2) Amazon S3
3) Amazon EBS
4) SSM Parameter Store

A

4) SSM Parameter Store

184
Q

Amazon GuardDuty scans the following data sources, EXCEPT …………….

1) CloudTrail Logs
2) VPC Flow Logs
3) DNS Logs
4) CloudWatch Logs

A

4) CloudWatch Logs

185
Q

You have a website hosted on a fleet of EC2 instances fronted by an Application Load Balancer. What should you use to protect your website from common web application attacks (e.g., SQL Injection)?

1) AWS Shield
2) AWS WAF
3) AWS Security Hub
4) AWS GuardDuty

A

2) AWS WAF

186
Q

You would like to analyze OS vulnerabilities from within EC2 instances. You need these analyses to occur weekly and provide you with concrete recommendations in case vulnerabilities are found. Which AWS service should you use?

1) AWS Shield
2) Amazon GuardDuty
3) Amazon Inspector
4) AWS Config

A

3) Amazon Inspector

187
Q

What is the most suitable AWS service for storing RDS DB passwords which also provides you automatic rotation?

1) AWS Secrets Manager
2) AWS KMS
3) AWS SSM Parameter Store

A

1) AWS Secrets Manager

188
Q

Which AWS service allows you to centrally manage EC2 Security Groups and AWS Shield Advanced across all AWS accounts in your AWS Organization?

1) AWS Shield
2) AWS GuardDuty
3) AWS Config
4) AWS Firewall Manager

A

4) AWS Firewall Manager

AWS Firewall Manager is a security management service that allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations. It is integrated with AWS Organizations so you can enable AWS WAF rules, AWS Shield Advanced protection, security groups, AWS Network Firewall rules, and Amazon Route 53 Resolver DNS Firewall rules.

189
Q

Which AWS service helps you protect your sensitive data stored in S3 buckets?

1) Amazon GuardDuty
2) Amazon Shield
3) Amazon Macie
4) AWS KMS

A

3) Amazon Macie

Amazon Macie is a fully managed data security service that uses Machine Learning to discover and protect your sensitive data stored in S3 buckets. It automatically provides an inventory of S3 buckets, including a list of unencrypted buckets, publicly accessible buckets, and buckets shared with other AWS accounts. It identifies and alerts you to sensitive data, such as Personally Identifiable Information (PII).

190
Q

An online-payment company is using AWS to host its infrastructure. The frontend is created using VueJS and is hosted on an S3 bucket, and the backend is developed using PHP and is hosted on EC2 instances in an Auto Scaling Group. As their customers are worldwide, they use both CloudFront and Aurora Global Database to implement multi-region deployments that provide the lowest latency, availability, and resiliency. A new feature is required that gives customers the ability to store encrypted data in the database, and this data must not be disclosed even to the company's admins. The data should be encrypted on the client side and stored in an encrypted format. What do you recommend to implement this?

1) Using Aurora Client-side Encryption and KMS Multi-region Keys
2) Using Lambda Client-side Encryption and KMS Multi-region Keys
3) Using Aurora Client-side Encryption and CloudHSM
4) Using Lambda Client-side Encryption and CloudHSM

A

1) Using Aurora Client-side Encryption and KMS Multi-region Keys

191
Q

You have an S3 bucket that is encrypted with SSE-KMS. You have been tasked to replicate the objects to a target bucket in the same AWS region but with a different KMS Key. You have configured the S3 replication, the target bucket, and the target KMS key and it is still not working. What is missing to make the S3 replication work?

1) This is not a supported feature
2) You have to raise a support ticket for AWS to start this replication process for you
3) You have to configure permissions for both Source KMS Key kms:Decrypt and Target KMS Key kms:Encrypt to be used by the S3 Replication Service
4) The source KMS Key and Target KMS key must be the same

A

3) You have to configure permissions for both Source KMS Key kms:Decrypt and Target KMS Key kms:Encrypt to be used by the S3 Replication Service

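The missing piece is the replication role's KMS permissions: decrypt with the source key, encrypt with the target key. A hedged sketch of that policy fragment (key ARNs are hypothetical placeholders):

```python
# KMS portion of the IAM policy attached to the S3 Replication role.
replication_kms_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Read side: decrypt source objects encrypted with SSE-KMS.
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/SOURCE-KEY",
        },
        {   # Write side: re-encrypt replicas with the target key.
            "Effect": "Allow",
            "Action": "kms:Encrypt",
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/TARGET-KEY",
        },
    ],
}

actions = [s["Action"] for s in replication_kms_policy["Statement"]]
print(actions)
```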
192
Q

You have generated a public certificate using LetsEncrypt and uploaded it to ACM so you can attach it to an Application Load Balancer that forwards traffic to EC2 instances. As this certificate was generated outside of AWS, it does not support the automatic renewal feature. How would you be notified 30 days before this certificate expires so you can manually generate a new one?

1) Configure ACM to send notifications by linking it to 3rd party certificate provider LetsEncrypt
2) Configure EventBridge for Daily Expiration Events from ACM to invoke SNS notifications to your email
3) Configure EventBridge for Monthly Expiration Events from ACM to invoke SNS notifications to your email
4) Configure CloudWatch Alarms for Daily Expiration Events from ACM to invoke SNS notifications to your email

A

2) Configure EventBridge for Daily Expiration Events from ACM to invoke SNS notifications to your email

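ACM emits a daily "ACM Certificate Approaching Expiration" event for certificates nearing expiry; an EventBridge rule matches it and targets an SNS topic. A sketch of the event pattern (the `source` and `detail-type` values are the ones ACM documents; the surrounding rule and SNS target are not shown):

```python
import json

# EventBridge rule pattern matching ACM's daily expiration events.
# A rule with this pattern would have an SNS topic as its target,
# which emails you well before the certificate expires.
event_pattern = {
    "source": ["aws.acm"],
    "detail-type": ["ACM Certificate Approaching Expiration"],
}

print(json.dumps(event_pattern))
```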
193
Q

You have created the main Edge-Optimized API Gateway in us-west-2 AWS region. This main Edge-Optimized API Gateway forwards traffic to the second level API Gateway in ap-southeast-1. You want to secure the main API Gateway by attaching an ACM certificate to it. Which AWS region are you going to create the ACM certificate in?

1) us-east-1
2) us-west-2
3) ap-southeast-1
4) Both us-east-1 and us-west-2 works

A

1) us-east-1

As the Edge-Optimized API Gateway uses an AWS-managed CloudFront distribution behind the scenes to route requests across the globe through CloudFront Edge locations, the ACM certificate must be created in us-east-1.

194
Q

You are managing an AWS Organization with multiple AWS accounts. Each account has a separate application with different resources. You want an easy way to manage Security Groups and WAF Rules across those accounts, as there was a security incident last week and you want to tighten up your resources. Which AWS service can help you do so?

1) Amazon GuardDuty
2) Amazon Shield
3) Amazon Inspector
4) AWS Firewall Manager

A

4) AWS Firewall Manager

195
Q

Which AWS service allows you to model and set up your AWS resources so you can spend less time managing those resources and more time focusing on your applications that run in AWS?

A. AWS Lambda
B. AWS CloudFormation
C. Amazon EC2
D. AWS Elastic Beanstalk

A

B. AWS CloudFormation

AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment.

CloudFormation is an Infrastructure as Code (IaC) service that allows you to model, provision, and manage AWS and third-party resources by writing templates.

196
Q

What is AWS SES primarily used for?

A. Monitoring and logging AWS resource usage.
B. Sending bulk email and transactional email.
C. Streaming data in real-time.
D. Managing user identities and access.

A

B. Sending bulk email and transactional email

AWS SES is a cost-effective, flexible, and scalable email service that enables developers to send mail from within any application.

197
Q

AWS Pinpoint is primarily used for which purpose?

A. Cost management and optimization.
B. Running batch jobs at scale.
C. Engaging with customers through email, SMS, push notifications, and campaigns.
D. Automating software deployments.

A

C. Engaging with customers through email, SMS, push notifications, and campaigns.

AWS Pinpoint is used for customer engagement through various channels like email, SMS, and push notifications, helping to drive user engagement through targeted communication.

198
Q

What is a key feature of AWS Systems Manager?

A. It is used for email sending and receiving.
B. It automates software deployments.
C. It helps you manage your AWS resources.
D. It is a data warehousing service.

A

C. It helps you manage your AWS resources.

AWS Systems Manager gives you visibility and control of your infrastructure on AWS. It provides a unified user interface that allows you to view operational data from multiple AWS services and automate operational tasks across your AWS resources.

199
Q

Which AWS service provides an interface that allows you to visualize, understand, and manage your AWS costs and usage over time?

A. AWS Budgets
B. AWS Cost Explorer
C. AWS Trusted Advisor
D. AWS Pricing Calculator

A

B. AWS Cost Explorer

AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time.

200
Q

What is AWS Batch primarily used for?

A. Running complex big data analytics.
B. Managing user identities and federating access.
C. Sending and receiving email communications.
D. Efficiently running hundreds to thousands of batch computing jobs on AWS.

A

D. Efficiently running hundreds to thousands of batch computing jobs on AWS.

AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds to thousands of batch computing jobs on AWS.

201
Q

What is the primary use of AWS AppFlow?

A. It’s used for building mobile and web applications.
B. It provides a secure, scalable, and cost-effective repository for data.
C. It securely transfers data between SaaS applications and AWS services.
D. It is a fully managed service for orchestrating workflows.

A

C. It securely transfers data between SaaS applications and AWS services.

AWS AppFlow is a fully managed integration service that enables you to securely transfer data between SaaS applications like Salesforce, ServiceNow, and AWS services like Amazon S3 and Redshift.

202
Q

AWS Amplify is best described as a tool for what purpose?

A. Managing cloud costs and usage.
B. Simplifying the deployment of batch jobs.
C. Building, deploying, and hosting mobile and web applications.
D. Automating network configurations.

A

C. Building, deploying, and hosting mobile and web applications.

AWS Amplify is a set of tools and services that can be used together or on their own to help front-end web and mobile developers build scalable full-stack applications, powered by AWS.

202
Q

What is the difference between Elastic Beanstalk, CloudFormation, and AWS Amplify?

A

Elastic Beanstalk: Best for developers who want to deploy applications quickly without managing the underlying infrastructure. It’s like renting an apartment - you control what’s inside, but not the building itself.
CloudFormation: Ideal for infrastructure engineers and DevOps who need to codify and automate the setup of their AWS infrastructure. It’s like building your own house - complete control over the construction.
Amplify: Geared towards frontend and mobile developers who want an integrated environment for both backend services and frontend development. It’s like having a toolkit to both design your house and manage the utilities and services it needs.

203
Q

Which AWS service allows you to centrally manage access to multiple AWS accounts and resources through a single AWS account?

A) AWS Identity and Access Management (IAM)
B) AWS Organizations
C) AWS Security Hub
D) AWS Config

A

B) AWS Organizations

AWS Organizations is a service that allows you to centrally manage and govern multiple AWS accounts. It helps you consolidate billing, implement security policies, and manage access control across your AWS resources from a single AWS account.

204
Q

Which AWS service provides a way to implement multi-factor authentication (MFA) for AWS accounts?

A) AWS Identity and Access Management (IAM)
B) AWS Key Management Service (KMS)
C) AWS Single Sign-On (SSO)
D) AWS Cognito

A

A) AWS Identity and Access Management (IAM)

AWS IAM provides the capability to enable multi-factor authentication (MFA) for AWS accounts. MFA adds an extra layer of security by requiring users to provide an additional authentication factor, such as a one-time password generated by a virtual or hardware MFA device, in addition to their regular username and password.

205
Q

You are designing a highly secure network architecture on AWS. Which security feature should you use to control traffic at the subnet level?

A) Network Access Control Lists (ACLs)
B) Security Groups
C) Web Application Firewall (WAF)
D) AWS Shield

A

A) Network Access Control Lists (ACLs)

Network Access Control Lists (ACLs) allow you to control inbound and outbound traffic at the subnet level. ACLs act as a stateless firewall, enabling you to create rules that define what type of traffic is allowed or denied between subnets in your VPC. They provide an additional layer of network security alongside security groups.

206
Q

You have a web application deployed on AWS that requires encryption of data at rest. Which AWS service would you use to achieve this?

A) AWS Key Management Service (KMS)
B) AWS Secrets Manager
C) AWS Certificate Manager (ACM)
D) AWS CloudHSM

A

A) AWS Key Management Service (KMS)

AWS Key Management Service (KMS) is a managed service that allows you to create and control the encryption keys used to encrypt your data. By integrating KMS with other AWS services such as Amazon S3, Amazon EBS, or Amazon RDS, you can easily encrypt your data at rest. KMS provides a secure and scalable solution for key management.

207
Q

In a scenario where you want to grant temporary access to an AWS resource to an external user without creating an AWS account for them, which option would you choose?

A) AWS Security Token Service (STS)
B) AWS Single Sign-On (SSO)
C) AWS Identity and Access Management (IAM) Roles
D) AWS Cognito

A

A) AWS Security Token Service (STS)

AWS Security Token Service (STS) enables you to provide temporary, limited-privilege credentials to external users without the need for them to have their own AWS account. STS issues temporary security tokens that can be used to access AWS resources for a specific duration, making it suitable for scenarios where temporary access is required, such as cross-account access or federated access.

208
Q

Which AWS service provides a fully managed, serverless computing platform for running containerized applications?

A) Amazon Elastic Container Registry (ECR)
B) Amazon Elastic Kubernetes Service (EKS)
C) AWS Fargate
D) AWS Batch

A

C) AWS Fargate

AWS Fargate is a serverless compute engine for containers that allows you to run containers without managing the underlying infrastructure. It provides a fully managed platform to deploy and run containerized applications, abstracting away the need to provision and manage servers, making it a secure and scalable option for running workloads.

209
Q

You are designing a highly available architecture for your application. Which AWS service can automatically distribute incoming application traffic across multiple AWS resources?

A) AWS CloudFront
B) AWS Elastic Beanstalk
C) AWS Global Accelerator
D) AWS Auto Scaling

A

C) AWS Global Accelerator

AWS Global Accelerator is a service that improves the availability and performance of your applications by directing traffic to optimal endpoints across multiple AWS regions. It uses the AWS global network infrastructure to route traffic to your resources, helping achieve high availability and low latency for your applications.

210
Q

You have a web application that requires authentication and authorization for its users. Which AWS service can be used to manage user identities and provide secure access to your application?

A) AWS Cognito
B) AWS Secrets Manager
C) AWS Single Sign-On (SSO)
D) AWS Identity and Access Management (IAM)

A

A) AWS Cognito

AWS Cognito is a fully managed service that enables you to add user sign-up, sign-in, and access control to your web and mobile applications. It provides secure authentication, authorization, and user management features, allowing you to easily authenticate users through various identity providers, such as social media or corporate directories.

211
Q

You need to secure your Amazon S3 bucket to ensure that all objects stored in it are encrypted. Which option ensures that any object uploaded to the bucket is automatically encrypted with a unique key?

A) Enable default encryption for the S3 bucket
B) Apply an S3 bucket policy to enforce encryption
C) Use a bucket ACL to enforce encryption
D) Use S3 bucket versioning to encrypt objects

A

A) Enable default encryption for the S3 bucket

By enabling default encryption for an Amazon S3 bucket, you ensure that any new object uploaded to the bucket is automatically encrypted with a unique key. This setting provides an additional layer of security and helps enforce encryption for all objects stored in the bucket, even if the encryption settings are not explicitly specified during the object upload.

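Default encryption is a bucket-level setting: every new object is encrypted even if the uploader sends no encryption headers. A sketch of the configuration body, in the shape used by the S3 PutBucketEncryption API (SSE-KMS shown; SSE-S3 would use `"AES256"`):

```python
# Bucket default-encryption configuration. With this in place, uploads
# that specify no encryption headers are still encrypted server-side.
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
            },
            "BucketKeyEnabled": True,  # S3 Bucket Keys: fewer KMS calls
        }
    ]
}

rule = encryption_config["Rules"][0]
print(rule["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"])
```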
212
Q

You have a requirement to secure your application by scanning it for vulnerabilities and configuration issues. Which AWS service can help you achieve this?

A) Amazon Inspector
B) AWS Config
C) AWS Security Hub
D) AWS Shield

A

A) Amazon Inspector

Amazon Inspector is an automated security assessment service that helps you test the security state of your applications and resources. It performs security assessments by scanning for vulnerabilities, security best practices, and common configuration issues. Amazon Inspector provides detailed findings and recommendations to help you improve the security posture of your applications.

213
Q

Which AWS service provides a fully managed key management service that allows you to create and control the encryption keys used to encrypt your data?

A) AWS CloudHSM
B) AWS Secrets Manager
C) AWS Key Management Service (KMS)
D) AWS Certificate Manager (ACM)

A

C) AWS Key Management Service (KMS)

AWS Key Management Service (KMS) is a fully managed service that allows you to create and control the encryption keys used to encrypt your data. It integrates with various AWS services, such as Amazon S3 and Amazon EBS, to provide seamless encryption at rest. KMS provides secure key storage and management, making it an appropriate choice for data security controls.

214
Q

You want to securely store and manage secrets such as database credentials and API keys. Which AWS service is specifically designed for this purpose?

A) AWS CloudHSM
B) AWS Secrets Manager
C) AWS Identity and Access Management (IAM)
D) AWS Directory Service

A

B) AWS Secrets Manager

AWS Secrets Manager is a service that helps you securely store and manage secrets such as database credentials, API keys, and other sensitive information. It provides a central location to store secrets, allows for automatic rotation of secrets, and integrates with AWS services and applications securely.

215
Q

You have a requirement to secure your Amazon S3 bucket by allowing access only to specific IP addresses. Which AWS service can help you achieve this?

A) Amazon VPC (Virtual Private Cloud)
B) AWS CloudTrail
C) AWS WAF (Web Application Firewall)
D) AWS Firewall Manager

A

A) Amazon VPC (Virtual Private Cloud)

Amazon VPC (Virtual Private Cloud) allows you to create a logically isolated section of the AWS Cloud, where you can define IP address ranges and configure security groups and network ACLs. By configuring the appropriate network settings within your VPC, you can control access to your Amazon S3 bucket by allowing access only from specific IP addresses or IP ranges.

216
Q

You need to encrypt data in transit between your on-premises data center and AWS. Which service can provide a secure and private connection for this scenario?

A) AWS Direct Connect
B) Amazon CloudFront
C) AWS Transit Gateway
D) AWS Site-to-Site VPN

A

A) AWS Direct Connect

AWS Direct Connect provides a dedicated network connection between your on-premises data center and AWS. It establishes a private, secure, and high-bandwidth connection that bypasses the public internet, ensuring the encryption and privacy of data in transit between the two environments.

217
Q

You want to implement data encryption at rest for your Amazon RDS database instances. Which encryption option is provided by AWS for this purpose?

A) Server-Side Encryption with AWS Key Management Service (SSE-KMS)
B) Client-Side Encryption
C) SSL/TLS Encryption
D) Network Encryption

A

A) Server-Side Encryption with AWS Key Management Service (SSE-KMS)

AWS provides Server-Side Encryption with AWS Key Management Service (SSE-KMS) as an option for encrypting data at rest for Amazon RDS database instances. With SSE-KMS, the data is encrypted using keys managed by AWS KMS, providing a secure and scalable solution for protecting sensitive data stored in the database.

218
Q

Which AWS service allows you to decouple and scale microservices-based architectures?

A) Amazon Simple Queue Service (SQS)
B) AWS Lambda
C) Amazon Simple Notification Service (SNS)
D) AWS Step Functions

A

A) Amazon Simple Queue Service (SQS)

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices-based architectures. It provides a reliable and scalable messaging platform for exchanging messages between the components of your application, helping achieve loose coupling and scalability.

219
Q

You have a requirement to process large amounts of data in real-time with low latency. Which AWS service is best suited for this scenario?

A) Amazon Redshift
B) Amazon Kinesis
C) Amazon Simple Storage Service (S3)
D) AWS Glue

A

B) Amazon Kinesis

Amazon Kinesis is a fully managed service for real-time streaming data processing. It can handle large amounts of data and provide low-latency processing, making it suitable for scenarios such as real-time analytics, IoT data ingestion, and log processing.

220
Q

You need to design a scalable architecture that can handle sudden traffic spikes. Which AWS service can automatically scale your application resources based on predefined policies?

A) AWS Elastic Beanstalk
B) AWS Lambda
C) Amazon EC2 Auto Scaling
D) AWS Fargate

A

C) Amazon EC2 Auto Scaling

Amazon EC2 Auto Scaling allows you to automatically scale your Amazon EC2 instances based on predefined policies and conditions. It helps your application handle sudden traffic spikes by automatically adding or removing instances to meet demand, ensuring scalability and optimal resource utilization.

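A common way to express such a predefined policy is target tracking: the Auto Scaling Group adds or removes instances to hold a metric at a target value. A sketch of the configuration, in the shape used by the `put_scaling_policy` API (the 50% CPU target is an illustrative choice):

```python
# Target-tracking configuration for an EC2 Auto Scaling policy: keep
# average CPU across the group near 50%, scaling out on traffic spikes
# and back in when load drops.
target_tracking = {
    "TargetValue": 50.0,
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "ASGAverageCPUUtilization",
    },
}

print(target_tracking["TargetValue"])
```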
221
Q

You are designing a highly available architecture for your application. Which AWS service can provide automatic failover between regions in the event of a service disruption?

A) Amazon CloudFront
B) AWS Global Accelerator
C) AWS Direct Connect
D) Amazon Route 53

A

D) Amazon Route 53

Amazon Route 53 is a scalable domain name system (DNS) web service that can provide automatic failover between regions in the event of a service disruption. By configuring health checks and DNS failover policies, Route 53 can route traffic to an alternate region if the primary region becomes unavailable, ensuring high availability and fault tolerance.

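The failover behavior is simple to picture: Route 53 answers with the primary record while its health check passes, and with the secondary record otherwise. A toy model of that decision (hostnames are made up; no DNS involved):

```python
def resolve(primary_healthy: bool) -> str:
    """Toy model of a Route 53 failover routing policy."""
    primary = "app.us-east-1.example.com"    # PRIMARY failover record
    secondary = "app.eu-west-1.example.com"  # SECONDARY failover record
    # Route 53 returns the primary only while its health check passes.
    return primary if primary_healthy else secondary

print(resolve(True))    # normal operation: primary region
print(resolve(False))   # regional disruption: automatic failover
```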
222
Q

You have a requirement to process large-scale data workflows with complex dependencies. Which AWS service can help you orchestrate and manage these workflows?

A) Amazon EMR (Elastic MapReduce)
B) AWS Glue
C) AWS Batch
D) AWS Step Functions

A

D) AWS Step Functions

AWS Step Functions is a fully managed service that helps you orchestrate and manage complex workflows for processing large-scale data. It allows you to coordinate and visualize the different steps and dependencies of your workflow, making it easier to design and manage scalable data processing pipelines.

223
Q

You need to design a highly available architecture for your application that can withstand the failure of an entire AWS Availability Zone. Which AWS service can help you achieve this?

A) AWS Elastic Load Balancer (ELB)
B) Amazon CloudFront
C) AWS Global Accelerator
D) Amazon Route 53

A

C) AWS Global Accelerator

AWS Global Accelerator is a service that helps improve the availability and performance of your applications by directing traffic to optimal endpoints across multiple AWS regions. In the event of an Availability Zone failure, AWS Global Accelerator can automatically route traffic to the healthy instances in other Availability Zones, ensuring high availability and fault tolerance.

224
Q

You are working on a Serverless application where you want to process objects uploaded to an S3 bucket. You have configured S3 Events on your S3 bucket to invoke a Lambda function every time an object has been uploaded. You want to ensure that events that can’t be processed are sent to a Dead Letter Queue (DLQ) for further processing. Which AWS service should you use to set up the DLQ?

1) S3 Events
2) SNS Topic
3) Lambda Function

A

3) Lambda Function

The Lambda function’s invocation is “asynchronous”, so the DLQ has to be set on the Lambda function side.

225
Q

As a Solutions Architect, you have created an architecture for a company that includes the following AWS services: CloudFront, Web Application Firewall (AWS WAF), AWS Shield, Application Load Balancer, and EC2 instances managed by an Auto Scaling Group. Sometimes the company receives malicious requests and wants to block these IP addresses. According to your architecture, where should you do it?

1) CloudFront
2) AWS WAF
3) AWS Shield
4) ALB Security Group
5) EC2 Instance Security Group
6) NACL

A

2) AWS WAF

226
Q

You have a 25 GB file that you’re trying to upload to S3 but you’re getting errors. What is a possible solution for this?

1) The file size limit on S3 is 5GB
2) Update your bucket policy to allow the larger file
3) Use Multi-Part upload when uploading files larger than 5GB
4) Encrypt this file

A

3) Use Multi-Part upload when uploading files larger than 5GB

Multi-Part Upload is recommended as soon as the file is over 100 MB

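The limits behind this answer: a single PUT caps at 5 GB, while multipart upload supports objects up to 5 TB split into at most 10,000 parts. A quick check that a 25 GB file fits comfortably when using 100 MB parts:

```python
import math

GB = 1024 ** 3
file_size = 25 * GB
part_size = 100 * 1024 ** 2   # 100 MB parts (the recommended threshold)

parts = math.ceil(file_size / part_size)
print(parts)                  # 25 GB / 100 MB = 256 parts

assert file_size > 5 * GB     # too big for a single PUT (5 GB cap)
assert parts <= 10_000        # well under the multipart part limit
```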
227
Q

You’re getting errors while trying to create a new S3 bucket named “dev”. You’re using a new AWS Account with no S3 buckets created before. What is a possible cause for this?

1) You’re missing IAM permissions to create an S3 bucket
2) S3 bucket names must be globally unique and “dev” is already taken

A

2) S3 bucket names must be globally unique and “dev” is already taken

228
Q

You have enabled versioning in your S3 bucket which already contains a lot of files. Which version will the existing files have?

1) 1
2) 0
3) -1
4) null

A

4) null

229
Q

You have updated an S3 bucket policy to allow IAM users to read/write files in the S3 bucket, but one of the users complains that they can’t perform a PutObject API call. What is a possible cause for this?

1) The S3 bucket policy must be wrong
2) The user is lacking permissions
3) The IAM user must have an explicit DENY in the attached IAM policy
4) You need to contact AWS Support to lift this limit

A

3) The IAM user must have an explicit DENY in the attached IAM policy

Explicit DENY in an IAM Policy will take precedence over an S3 bucket policy.

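The evaluation logic is worth remembering: an explicit Deny anywhere (IAM policy or bucket policy) overrides any number of Allows, and with no matching statements at all the default is an implicit deny. A minimal model of that rule:

```python
def evaluate(statements) -> str:
    """Simplified IAM/S3 policy evaluation: explicit Deny always wins."""
    if any(s == "Deny" for s in statements):
        return "Deny"    # explicit deny, regardless of any allows
    if any(s == "Allow" for s in statements):
        return "Allow"
    return "Deny"        # implicit deny by default

# Bucket policy allows PutObject, but the user's IAM policy denies it:
print(evaluate(["Allow", "Deny"]))  # explicit deny wins
print(evaluate(["Allow"]))          # allowed
print(evaluate([]))                 # implicit deny
```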
230
Q

You want the content of an S3 bucket to be fully available in different AWS Regions. That will help your team perform data analysis at the lowest latency and cost possible. What S3 feature should you use?

1) Amazon CloudFront Distributions
2) S3 Versioning
3) S3 Static Website Hosting
4) S3 Replication

A

4) S3 Replication

S3 Replication allows you to replicate data from an S3 bucket to another in the same/different AWS Region

231
Q

You have 3 S3 buckets. One source bucket A, and two destination buckets B and C in different AWS Regions. You want to replicate objects from bucket A to both bucket B and C. How would you achieve this?

1) Configure replication from bucket A to bucket B, then from bucket A to bucket C
2) Configure replication from bucket A to bucket B, then from bucket B to bucket C
3) Configure replication from bucket A to bucket C, then from bucket C to bucket B

A

1) Configure replication from bucket A to bucket B, then from bucket A to bucket C

232
Q

Which of the following is NOT a Glacier Deep Archive retrieval mode?

1) Expedited (1-5 minute)
2) Standard (12 hours)
3) Bulk (48 hours)

A

1) Expedited (1-5 minute)

233
Q

Which of the following is NOT a Glacier Flexible retrieval mode?

1) Instant (10 seconds)
2) Expedited (1-5 minutes)
3) Standard (3-5 hours)
4) Bulk (5-12 hours)

A

1) Instant (10 seconds)

234
Q

How can you be notified when an object is uploaded to your S3 bucket?

1) S3 Select
2) S3 Access Logs
3) S3 Event Notifications
4) S3 Analytics

A

3) S3 Event Notifications

235
Q

You have an S3 bucket that has S3 Versioning enabled. This S3 bucket has a lot of objects, and you would like to remove old object versions to reduce costs. What’s the best approach to automate the deletion of these old object versions?

1) S3 Lifecycle Rules - Transition Actions
2) S3 Lifecycle Rules - Expiration Actions
3) S3 Access Logs

A

2) S3 Lifecycle Rules - Expiration Actions
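A rule like the following expires noncurrent versions automatically. This is a sketch: the 30-day threshold and bucket name are placeholders.

```python
# Lifecycle rule that permanently deletes object versions 30 days after
# they become noncurrent (i.e., after a newer version replaces them).
LIFECYCLE_CONFIG = {
    "Rules": [
        {
            "ID": "expire-old-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        }
    ]
}

def apply_lifecycle(bucket):
    import boto3  # requires AWS credentials at runtime
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=LIFECYCLE_CONFIG
    )
```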

236
Q

How can you automate the transition of S3 objects between their different tiers?

1) AWS Lambda
2) CloudWatch Events
3) S3 Lifecycle Rules

A

3) S3 Lifecycle Rules

237
Q

While you’re uploading large files to an S3 bucket using Multi-part Upload, there are a lot of unfinished parts stored in the S3 bucket due to network issues. You are not using these unfinished parts and they cost you money. What is the best approach to remove these unfinished parts?

1) Use AWS Lambda to loop on each old/unfinished part and delete them
2) Request AWS Support to help you delete old/unfinished parts
3) Use an S3 Lifecycle Policy to automate old/unfinished parts deletion

A

3) Use an S3 Lifecycle Policy to automate old/unfinished parts deletion
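A sketch of such a lifecycle rule (the 7-day threshold is an illustrative choice): AbortIncompleteMultipartUpload tells S3 to abort stalled uploads and delete their stored parts.

```python
# Lifecycle rule that aborts multipart uploads still unfinished 7 days
# after they were initiated, deleting the orphaned parts.
ABORT_MPU_CONFIG = {
    "Rules": [
        {
            "ID": "abort-incomplete-multipart",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }
    ]
}

def apply_abort_rule(bucket):
    import boto3  # requires AWS credentials at runtime
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=ABORT_MPU_CONFIG
    )
```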

238
Q

You are looking to get recommendations for S3 Lifecycle Rules. How can you analyze the optimal number of days to move objects between different storage tiers?

1) S3 Inventory
2) S3 Analytics
3) S3 Lifecycle Rules Advisor

A

2) S3 Analytics

239
Q

You are looking to build an index of your files in S3, using Amazon RDS PostgreSQL. To build this index, it is necessary to read the first 250 bytes of each object in S3, which contain some metadata about the content of the file itself. There are over 100,000 files in your S3 bucket, amounting to 50 TB of data. How can you build this index efficiently?

1) Use the RDS Import feature to load the data from S3 to PostgreSQL, and run a SQL query to build the index
2) Create an application that will traverse the S3 bucket, read all the files one by one, extract the first 250 bytes, and store that information in RDS
3) Create an application that will traverse the S3 bucket, issue a Byte Range Fetch for the first 250 bytes and store that information in RDS
4) Create an application that will traverse the S3 bucket, use S3 Select to get the first 250 bytes and store that information in RDS

A

3) Create an application that will traverse the S3 bucket, issue a Byte Range Fetch for the first 250 bytes and store that information in RDS
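A Byte-Range Fetch is just a Range header on GetObject, so only the requested bytes are downloaded. A minimal sketch; the bucket and key are placeholders.

```python
def range_header(num_bytes):
    # HTTP ranges are inclusive: the first 250 bytes are bytes 0 through 249.
    return f"bytes=0-{num_bytes - 1}"

def read_prefix(bucket, key, num_bytes=250):
    # Downloads only the first `num_bytes` of the object, not the whole file.
    import boto3  # requires AWS credentials at runtime
    s3 = boto3.client("s3")
    resp = s3.get_object(Bucket=bucket, Key=key, Range=range_header(num_bytes))
    return resp["Body"].read()
```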

239
Q

You have a large dataset stored on-premises that you want to upload to an S3 bucket. The dataset is divided into 10 GB files. You have good bandwidth but your Internet connection isn’t stable. What is the best way to upload this dataset to S3 quickly while avoiding problems caused by the unstable Internet connection?

1) Use Multi-part Upload only
2) Use S3 Select & Use S3 Transfer Acceleration
3) Use Multi-part Upload & S3 Transfer Acceleration

A

3) Use Multi-part Upload & S3 Transfer Acceleration

240
Q

You would like to retrieve a subset of your dataset stored in S3 with the .csv format. You would like to retrieve a month of data and only 3 columns out of 10, to minimize compute and network costs. What should you use?

1) S3 Analytics
2) S3 Access Logs
3) S3 Select
4) S3 Inventory

A

3) S3 Select
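With S3 Select, the filtering runs server-side with a SQL expression, so only the matching rows and columns cross the network. A sketch, where the column names and the bucket/key are illustrative placeholders.

```python
# SQL run by S3 against the CSV: 3 of the 10 columns, one month of rows.
QUERY = (
    "SELECT s.customer_id, s.order_date, s.amount "
    "FROM S3Object s "
    "WHERE s.order_date LIKE '2024-01-%'"
)

def run_select(bucket, key):
    import boto3  # requires AWS credentials at runtime
    s3 = boto3.client("s3")
    resp = s3.select_object_content(
        Bucket=bucket,
        Key=key,
        ExpressionType="SQL",
        Expression=QUERY,
        InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},  # use header row names
        OutputSerialization={"CSV": {}},
    )
    # The result arrives as an event stream; yield the record payloads.
    for event in resp["Payload"]:
        if "Records" in event:
            yield event["Records"]["Payload"].decode()
```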

241
Q

A company is preparing for a compliance and regulatory review of its infrastructure on AWS. Currently, their files are stored in S3 buckets that are not encrypted, and they must be encrypted to pass the review. Which S3 feature allows them to encrypt all files in their S3 buckets in the most efficient and cost-effective way?

1) S3 Access Points
2) S3 Cross-Region Replication
3) S3 Batch Operations
4) S3 Lifecycle Rules

A

3) S3 Batch Operations

242
Q

Your client wants to make sure that file encryption is happening in S3, but they want to fully manage the encryption keys and never store them in AWS. You recommend they use ……………………….

1) SSE-S3
2) SSE-KMS
3) SSE-C
4) Client-Side Encryption

A

3) SSE-C

With SSE-C, the encryption happens in AWS and you have full control over the encryption keys.
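In practice you pass the key with every request; a sketch with boto3, where the bucket and object key are placeholders.

```python
def validate_customer_key(key_bytes):
    # SSE-C requires a 256-bit (32-byte) key that the client supplies.
    if len(key_bytes) != 32:
        raise ValueError("SSE-C key must be exactly 32 bytes (AES-256)")
    return key_bytes

def upload_with_ssec(bucket, obj_key, data, customer_key):
    # S3 encrypts with the provided key but never stores it; lose the key
    # and the object can never be decrypted.
    import boto3  # requires AWS credentials at runtime
    boto3.client("s3").put_object(
        Bucket=bucket, Key=obj_key, Body=data,
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=validate_customer_key(customer_key),
    )
```

The same key and algorithm must be supplied again on every GetObject for that object.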

243
Q

A company you’re working for wants their data stored in S3 to be encrypted. They don’t mind the encryption keys being stored and managed by AWS, but they want to maintain control over the rotation policy of the encryption keys. You recommend they use ………………..

1) SSE-S3
2) SSE-KMS
3) SSE-C
4) Client-Side Encryption

A

2) SSE-KMS

With SSE-KMS, the encryption happens in AWS, and the encryption keys are managed and stored by AWS, but you have full control over the rotation policy of the encryption key.

244
Q

Your company does not trust AWS for the encryption process and wants it to happen on the application. You recommend them to use ………………..

1) SSE-S3
2) SSE-KMS
3) SSE-C
4) Client-Side Encryption

A

4) Client-Side Encryption

With Client-Side Encryption, you perform the encryption yourself and send the encrypted data to AWS, and you have full control over the encryption keys. AWS does not know your encryption keys and cannot decrypt your data.

245
Q

You have a website that loads files from an S3 bucket. When you try the URL of the files directly in your Chrome browser it works, but when a website with a different domain tries to load these files it doesn’t. What’s the problem?

1) The Bucket policy is wrong
2) The IAM policy is wrong
3) CORS is wrong
4) Encryption is wrong

A

3) CORS is wrong

Cross-Origin Resource Sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. To learn more about CORS, go here: https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
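A CORS configuration on the bucket fixes this; a sketch allowing GET requests from one external site (the origin is a placeholder).

```python
CORS_CONFIG = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://www.example.com"],  # the other website's domain
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,  # how long browsers may cache the preflight response
        }
    ]
}

def apply_cors(bucket):
    import boto3  # requires AWS credentials at runtime
    boto3.client("s3").put_bucket_cors(
        Bucket=bucket, CORSConfiguration=CORS_CONFIG
    )
```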

246
Q

An e-commerce company has its customers and orders data stored in an S3 bucket. The company’s CEO wants to generate a report to show the list of customers and the revenue for each customer. Customer data stored in files on the S3 bucket has sensitive information that we don’t want to expose in the report. How do you recommend the report can be created without exposing sensitive information?

1) Use S3 Object Lambda to change the objects before they are retrieved by the report
2) Create another S3 bucket. Create a lambda function to process each file, remove the sensitive information, and then move them to the new S3 bucket
3) Use S3 Object Lock to lock the sensitive information from being fetched by the report generator application

A

1) Use S3 Object Lambda to change the objects before they are retrieved by the report

247
Q

For compliance reasons, your company has a policy mandating that database backups must be retained for 4 years. It shouldn’t be possible to erase them. What do you recommend?

1) Glacier Vaults with Vault Lock Policies
2) EFS network drives with restrictive Linux permissions
3) S3 with Bucket Policies

A

1) Glacier Vaults with Vault Lock Policies

248
Q

You suspect that some of your employees try to access files in an S3 bucket that they don’t have access to. How can you verify this is indeed the case without them noticing?

1) Enable S3 Access Logs and analyze them using Athena
2) Restrict their IAM policies and look at CloudTrail logs
3) Use a bucket policy

A

1) Enable S3 Access Logs and analyze them using Athena

S3 Access Logs log all the requests made to S3 buckets and Amazon Athena can then be used to run serverless analytics on top of the log files.
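For example, once the access logs are exposed as an Athena table, a query like this surfaces denied requests. This is a sketch: the database/table name is a placeholder, and the column names assume a table defined over the documented S3 access-log format.

```python
# Athena SQL over S3 server access logs: who is getting 403s, and on what keys.
DENIED_REQUESTS_SQL = """
SELECT requester, key, COUNT(*) AS attempts
FROM s3_access_logs_db.mybucket_logs
WHERE httpstatus = '403'
GROUP BY requester, key
ORDER BY attempts DESC
"""

def start_query(output_location):
    import boto3  # requires AWS credentials at runtime
    athena = boto3.client("athena")
    return athena.start_query_execution(
        QueryString=DENIED_REQUESTS_SQL,
        ResultConfiguration={"OutputLocation": output_location},  # s3://... for results
    )
```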

249
Q

You are looking to provide temporary URLs to a growing list of federated users to allow them to perform a file upload on your S3 bucket to a specific location. What should you use?

1) S3 CORS
2) S3 Pre-Signed URL
3) S3 Bucket Policies

A

2) S3 Pre-Signed URL

S3 Pre-Signed URLs are temporary URLs that you generate to grant time-limited access to some actions in your S3 bucket.
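A sketch of generating such URLs on demand; the bucket, the uploads/ prefix, and the 1-hour expiry are placeholders.

```python
def upload_key(filename):
    # Constrain every federated user to the same prefix ("specific location").
    return f"uploads/{filename}"

def make_upload_url(bucket, filename, expires_in=3600):
    import boto3  # requires AWS credentials at runtime
    s3 = boto3.client("s3")
    # The URL inherits the permissions of the credentials that signed it
    # and stops working after `expires_in` seconds.
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": bucket, "Key": upload_key(filename)},
        ExpiresIn=expires_in,
    )
```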

250
Q

You would like all your files in an S3 bucket to be encrypted by default. What is the optimal way of achieving this?

1) Use a bucket policy that forces HTTPS connections
2) Do nothing, Amazon S3 automatically encrypts new objects using Server-Side Encryption with S3-Managed Keys (SSE-S3)
3) Enable Versioning

A

2) Do nothing, Amazon S3 automatically encrypts new objects using Server-Side Encryption with S3-Managed Keys (SSE-S3)

251
Q

You have enabled versioning and want to be extra careful when it comes to deleting files on an S3 bucket. What should you enable to prevent accidental permanent deletions?

1) Use a bucket policy
2) Enable MFA Delete
3) Encrypt the files
4) Disable versioning

A

2) Enable MFA Delete

MFA Delete forces users to use MFA codes before deleting S3 objects. It’s an extra level of security to prevent accidental deletions.

252
Q

A company has its data and files stored on some S3 buckets. Some of these files need to be kept for a predefined period of time and protected from being overwritten or deleted, according to company compliance policy. Which S3 feature helps you in doing this?

1) S3 Object Lock - Retention Governance Mode
2) S3 Versioning
3) S3 Object Lock - Retention Compliance Mode
4) S3 Glacier Vault Lock

A

3) S3 Object Lock - Retention Compliance Mode

253
Q

Which of the following S3 Object Lock configurations allows you to prevent an object or its versions from being overwritten or deleted indefinitely and gives you the ability to remove it manually?

1) Retention Governance Mode
2) Retention Compliance Mode
3) Legal Hold

A

3) Legal Hold

254
Q

Amazon RDS supports the following databases, EXCEPT:

1) MongoDB
2) MySQL
3) MariaDB
4) Microsoft SQL Server

A

1) MongoDB

255
Q

You’re planning for a new solution that requires a MySQL database that must be available even in case of a disaster in one of the Availability Zones. What should you use?

1) Create Read Replicas
2) Enable Encryption
3) Enable Multi-AZ

A

3) Enable Multi-AZ

Multi-AZ helps when you plan a disaster recovery for an entire AZ going down. If you plan against an entire AWS Region going down, you should use backups and replication across AWS Regions.

256
Q

We have an RDS database that struggles to keep up with the demand of requests from our website. Our million users mostly read news, and we don’t post news very often. Which solution is NOT adapted to this problem?

1) An ElastiCache Cluster
2) RDS Multi-AZ
3) RDS Read Replicas

A

2) RDS Multi-AZ

Be very careful with the way you read questions at the exam. Here, the question is asking which solution is NOT adapted to this problem. ElastiCache and RDS Read Replicas do indeed help with scaling reads.

257
Q

You have set up read replicas on your RDS database, but users are complaining that upon updating their social media posts, they do not see their updated posts right away. What is a possible cause for this?

1) There must be a bug in your application
2) Read Replicas have Asynchronous Replication, therefore it’s likely your users will only read Eventual Consistency
3) You should have setup Multi-AZ instead

A

2) Read Replicas have Asynchronous Replication, therefore it’s likely your users will only read Eventual Consistency

258
Q

Which RDS (NOT Aurora) feature when used does not require you to change the SQL connection string?

1) Multi-AZ
2) Read Replicas

A

1) Multi-AZ

Multi-AZ keeps the same connection string regardless of which database is up.

259
Q

Your application runs on a fleet of EC2 instances managed by an Auto Scaling Group behind an Application Load Balancer. Users have to constantly log back in and you don’t want to enable Sticky Sessions on your ALB as you fear it will overload some EC2 instances. What should you do?

1) Use your own custom Load Balancer on EC2 instances instead of using ALB
2) Store session data in RDS
3) Store session data in ElastiCache
4) Store session data in a shared EBS volume

A

3) Store session data in ElastiCache

Storing session data in ElastiCache is a common pattern for ensuring different EC2 instances can retrieve your users’ state if needed.

260
Q

An analytics application is currently performing its queries against your main production RDS database. These queries run at any time of the day and slow down the RDS database which impacts your users’ experience. What should you do to improve the users’ experience?

1) Setup a Read Replica
2) Setup a Multi-AZ
3) Run the analytics queries at night

A

1) Setup a Read Replica

261
Q

You would like to ensure you have a replica of your database available in another AWS Region if a disaster happens to your main AWS Region. Which database do you recommend to implement this easily?

1) RDS Read Replicas
2) RDS Multi-AZ
3) Aurora Read Replicas
4) Aurora Global Database

A

4) Aurora Global Database

Aurora Global Database allows you to have Aurora Replicas in other AWS Regions, with up to 5 secondary Regions.

262
Q

How can you enhance the security of your ElastiCache Redis Cluster by allowing users to access your ElastiCache Redis Cluster using their IAM Identities (e.g., Users, Roles)?

1) Using Redis Authentication
2) Using IAM Authentication
3) Using Security Groups

A

2) Using IAM Authentication

263
Q

Your company has a production Node.js application that is using RDS MySQL 5.6 as its database. A new application programmed in Java will perform some heavy analytics workload to create a dashboard on a regular hourly basis. What is the most cost-effective solution you can implement to minimize disruption for the main application?

1) Enable Multi-AZ of the RDS database and run the analytics workload on the standby database
2) Create a Read Replica in a different AZ and run the analytics workload on the standby database
3) Create a Read Replica in a different AZ and run the analytics workload on the source database

A

2) Create a Read Replica in a different AZ and run the analytics workload on the standby database

264
Q

You would like to create a disaster recovery strategy for your RDS PostgreSQL database so that in case of a regional outage the database can be quickly made available for both read and write workloads in another AWS Region. The DR database must be highly available. What do you recommend?

1) Create a Read Replica in the same region and enable Multi-AZ on the main database
2) Create a Read Replica in a different region and enable Multi-AZ on the Read Replica
3) Create a Read Replica in the same region and enable Multi-AZ on the Read Replica
4) Enable Multi-Region option on the main database

A

2) Create a Read Replica in a different region and enable Multi-AZ on the Read Replica

265
Q

You have migrated the MySQL database from on-premises to RDS. You have a lot of applications and developers interacting with your database. Each developer has an IAM user in the company’s AWS account. What is a suitable approach to give developers access to the MySQL RDS DB instance without creating a DB user for each one?

1) By default IAM users have access to your RDS database
2) Use Amazon Cognito
3) Enable IAM Database Authentication

A

3) Enable IAM Database Authentication
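With IAM Database Authentication, a developer's IAM identity signs a short-lived token that replaces the DB password. A sketch; the host, port, user, and region are placeholders, and the token is then used as the MySQL password over a TLS connection.

```python
def get_db_token(host, port, db_user, region="us-east-1"):
    import boto3  # requires AWS credentials at runtime
    rds = boto3.client("rds", region_name=region)
    # The token is valid for 15 minutes; no long-lived DB password exists.
    return rds.generate_db_auth_token(
        DBHostname=host, Port=port, DBUsername=db_user
    )

def connection_params(host, port, db_user, token):
    # The token goes where the password normally would; TLS is required
    # for IAM authentication.
    return {"host": host, "port": port, "user": db_user, "password": token}
```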

266
Q

Which of the following statements is true regarding replication in both RDS Read Replicas and Multi-AZ?

1) Read Replica uses Asynchronous Replication and Multi-AZ uses Asynchronous Replication
2) Read Replica uses Asynchronous Replication and Multi-AZ uses Synchronous Replication
3) Read Replica uses Synchronous Replication and Multi-AZ uses Synchronous Replication
4) Read Replica uses Synchronous Replication and Multi-AZ uses Asynchronous Replication

A

2) Read Replica uses Asynchronous Replication and Multi-AZ uses Synchronous Replication

267
Q

How do you encrypt an unencrypted RDS DB instance?
1) Do it straight from the AWS Console, select your RDS DB instance, choose Actions then Encrypt using KMS
2) Do it straight from AWS Console, after stopping the RDS instance
3) Create a snapshot of the unencrypted RDS DB instance, copy the snapshot and tick “Enable encryption”, then restore the RDS DB instance from the encrypted snapshot

A

3) Create a snapshot of the unencrypted RDS DB instance, copy the snapshot and tick “Enable encryption”, then restore the RDS DB instance from the encrypted snapshot
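The snapshot-copy-restore flow, sketched with boto3. Identifiers and the KMS key are placeholders, and each step must reach the "available" state before the next starts (waiting omitted for brevity).

```python
def snapshot_names(instance_id):
    # Illustrative naming for the intermediate snapshots.
    return f"{instance_id}-plain", f"{instance_id}-encrypted"

def encrypt_db_instance(instance_id, kms_key_id):
    import boto3  # requires AWS credentials at runtime
    rds = boto3.client("rds")
    plain, encrypted = snapshot_names(instance_id)
    rds.create_db_snapshot(
        DBInstanceIdentifier=instance_id, DBSnapshotIdentifier=plain
    )
    # ... wait for the snapshot to become "available" ...
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier=plain,
        TargetDBSnapshotIdentifier=encrypted,
        KmsKeyId=kms_key_id,  # copying with a KMS key is what enables encryption
    )
    # ... wait again, then restore a new, encrypted instance from the copy ...
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier=f"{instance_id}-enc",
        DBSnapshotIdentifier=encrypted,
    )
```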

268
Q

For your RDS database, you can have up to ………… Read Replicas.
1) 5
2) 15
3) 7

A

2) 15

269
Q

Which RDS database technology does NOT support IAM Database Authentication?
1) Oracle
2) PostgreSQL
3) MySQL

A

1) Oracle

270
Q

You have an unencrypted RDS DB instance and you want to create Read Replicas. Can you configure the RDS Read Replicas to be encrypted?

Yes
No

A

No

You cannot create encrypted Read Replicas from an unencrypted RDS DB instance.

271
Q

An application running in production is using an Aurora Cluster as its database. Your development team would like to run a scaled-down version of the application with the ability to perform some heavy workloads on a need basis. Most of the time, the application will be unused. Your CIO has tasked you with helping the team achieve this while minimizing costs. What do you suggest?

1) Use an Aurora Global Database
2) Use an RDS database
3) Use Aurora Serverless
4) Run Aurora on EC2, and write a script to shut down the EC2 instance at night

A

3) Use Aurora Serverless

272
Q

How many Aurora Read Replicas can you have in a single Aurora DB Cluster?

1) 5
2) 10
3) 15

A

3) 15

273
Q

Amazon Aurora supports both …………………….. databases.

1) MySQL and MariaDB
2) MySQL and PostgreSQL
3) Oracle and MariaDB
4) Oracle and MS SQL Server

A

2) MySQL and PostgreSQL

274
Q

You work as a Solutions Architect for a gaming company. One of the games mandates that players are ranked in real-time based on their score. Your boss asked you to design and implement an effective and highly available solution to create a gaming leaderboard. What should you use?

1) Use RDS for MySQL
2) Use an Amazon Aurora
3) Use ElastiCache for Memcached
4) Use ElastiCache for Redis - Sorted Sets

A

4) Use ElastiCache for Redis - Sorted Sets
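A Sorted Set keeps members ordered by score, so "top N" is a single cheap read. A sketch with the redis-py client, where the endpoint is a placeholder for an ElastiCache Redis endpoint; `top_n` is a pure-Python equivalent of ZREVRANGE, included for illustration.

```python
def top_n(scores, n=10):
    # Pure-Python equivalent of ZREVRANGE 0 n-1 WITHSCORES: highest first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

def record_score(r, player, score):
    r.zadd("leaderboard", {player: score})  # insert or update, kept sorted by Redis

def top_players(r, n=10):
    return r.zrevrange("leaderboard", 0, n - 1, withscores=True)

def connect(endpoint):
    import redis  # pip install redis; needs network access to the cluster
    return redis.Redis(host=endpoint, port=6379)
```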

275
Q

You need full customization of an Oracle Database on AWS. You would like to benefit from using the AWS services. What do you recommend?

1) RDS for Oracle
2) RDS Custom for Oracle
3) Deploy Oracle on EC2

A

2) RDS Custom for Oracle

276
Q

You need to store long-term backups for your Aurora database for disaster recovery and audit purposes. What do you recommend?

1) Enable Automated Backups
2) Perform On Demand Backups
3) Use Aurora Database Cloning

A

2) Perform On Demand Backups

277
Q

Your development team would like to perform a suite of read and write tests against your production Aurora database because they need access to production data as soon as possible. What do you advise?

1) Create an Aurora Read Replica for them
2) Do the test against the production database
3) Make a DB Snapshot and Restore it into a new database
4) Use the Aurora Cloning feature

A

4) Use the Aurora Cloning feature

278
Q

You have 100 EC2 instances connected to your RDS database and you see that during database maintenance, all your applications take a long time to reconnect to RDS due to poor application logic. How do you improve this?

1) Fix all the applications
2) Disable Multi-AZ
3) Enable Multi-AZ
4) Use an RDS Proxy

A

4) Use an RDS Proxy

RDS Proxy reduces the failover time by up to 66% and keeps connections active for your applications.

279
Q

You should use Amazon Transcribe to turn text into lifelike speech using deep learning.

True
False

A

False

Amazon Transcribe is an AWS service that makes it easy for customers to convert speech-to-text. Amazon Polly is a service that turns text into lifelike speech.

280
Q

A company would like to implement a chatbot that will convert speech-to-text and recognize the customers’ intentions. What service should it use?

1) Transcribe
2) Rekognition
3) Connect
4) Lex

A

4) Lex

Amazon Lex is a service for building conversational interfaces into any application using voice and text. Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions.

281
Q

Which fully managed service can deliver highly accurate forecasts?

1) Personalize
2) SageMaker
3) Lex
4) Forecast

A

4) Forecast

Amazon Forecast is a fully managed service that uses machine learning to deliver highly accurate forecasts.

282
Q

You would like to find objects, people, text, or scenes in images and videos. What AWS service should you use?

1) Rekognition
2) Polly
3) Kendra
4) Lex

A

1) Rekognition

Amazon Rekognition makes it easy to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise to use.

283
Q

A start-up would like to rapidly create customized user experiences. Which AWS service can help?

1) Personalize
2) Kendra
3) Connect

A

1) Personalize

Amazon Personalize is a machine learning service that makes it easy for developers to create individualized recommendations for customers using their applications.

284
Q

A research team would like to group articles by topics using Natural Language Processing (NLP). Which service should they use?

1) Translate
2) Comprehend
3) Lex
4) Rekognition

A

2) Comprehend

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find meaning and insights in text.

285
Q

A company would like to convert its documents into different languages, with natural and accurate wording. What should they use?

1) Transcribe
2) Polly
3) Translate
4) WordTranslator

A

3) Translate

Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation.

286
Q

A developer would like to build, train, and deploy a machine learning model quickly. Which service can they use?

1) SageMaker
2) Polly
3) Comprehend
4) Personalize

A

1) SageMaker

Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models.

287
Q

Which AWS service makes it easy to convert speech-to-text?

1) Connect
2) Translate
3) Transcribe
4) Polly

A

3) Transcribe

Amazon Transcribe is an AWS service that makes it easy for customers to convert speech-to-text.

288
Q

Which of the following services is a document search service powered by machine learning?

1) Forecast
2) Kendra
3) Comprehend
4) Polly

A

2) Kendra

Amazon Kendra is a highly accurate and easy to use enterprise search service that’s powered by machine learning.

289
Q

A company is managing an image and video sharing platform used by customers around the globe. The platform runs on AWS, using an S3 bucket to host both images and videos and CloudFront as the CDN to deliver content to customers all over the world with low latency. In the last couple of months, many customers have complained about seeing inappropriate content on the platform, and the complaints have increased in the last week. It would be very expensive and time-consuming to have employees manually approve those images and videos before they are published. There is a requirement for a solution that can automatically detect inappropriate and offensive images and videos, gives you the ability to set a minimum confidence threshold for items that will be flagged, and allows for manual review. Which AWS service fits the requirement?

1) Amazon Polly
2) Amazon Translate
3) Amazon Lex
4) Amazon Rekognition

A

4) Amazon Rekognition

290
Q

An online medical company that allows you to book appointments with doctors through a phone call is using AWS to host their infrastructure. They are using Amazon Connect and Amazon Lex to receive calls, create a workflow, book an appointment, and take payment. According to the company’s policy, all calls must be recorded for review. However, there is a requirement to remove any Personally Identifiable Information (PII) from the call before it’s saved. What do you recommend for removing PII from the calls?

1) Amazon Polly
2) Amazon Transcribe
3) Amazon Rekognition
4) Amazon Forecast

A

2) Amazon Transcribe

291
Q

Amazon Polly allows you to turn text into speech. It has two important features. The first is ……………….., which allows you to customize the pronunciation of words (e.g., “Amazon EC2” will be pronounced “Amazon Elastic Compute Cloud”). The second is ……………….., which allows you to emphasize words, include breathing sounds, whispering, and more.

1) Speech Synthesis Markup Language (SSML), Pronunciation Lexicons
2) Pronunciation Lexicons, Security Assertion Markup Language (SAML)
3) Pronunciation Lexicons, Speech Synthesis Markup Language (SSML)
4) Security Assertion Markup Language (SAML), Pronunciation Lexicons

A

3) Pronunciation Lexicons, Speech Synthesis Markup Language (SSML)

292
Q

A medical company is in the process of implementing a solution to detect, extract, and analyze information from unstructured medical text like doctors’ notes, clinical trial reports, and radiology reports. Those documents are uploaded and stored on S3 buckets. According to the company’s regulations, the solution must be designed and implemented to protect patients’ privacy by identifying Protected Health Information (PHI), so the solution will be HIPAA-eligible. Which AWS service should you use?

1) Amazon Comprehend Medical
2) Amazon Rekognition
3) Amazon Polly
4) Amazon Translate

A

1) Amazon Comprehend Medical

293
Q

Which AWS Service analyzes your AWS account and gives recommendations for cost optimization, performance, security, fault tolerance, and service limits?

1) AWS Trusted Advisor
2) AWS CloudTrail
3) AWS IAM
4) AWS CloudFormation

A

1) AWS Trusted Advisor

AWS Trusted Advisor provides recommendations that help you follow AWS best practices. It evaluates your account by using checks. These checks identify ways to optimize your AWS infrastructure, improve security and performance, reduce costs, and monitor service quotas.

294
Q

Your company has received results back from an audit. One of the mandates from the audit is that your application, which is hosted on EC2, must encrypt the data before writing this data to storage. It has been directed internally that you must have the ability to manage dedicated hardware security module instances to generate and store your encryption keys. Which service could you use to meet this requirement?

1) Amazon EBS encryption
2) AWS CloudHSM
3) AWS Security Token Service
4) AWS KMS

A

2) AWS CloudHSM

The AWS CloudHSM service helps you meet corporate, contractual, and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) instances within the AWS cloud. A Hardware Security Module (HSM) provides secure key storage and cryptographic operations within a tamper-resistant hardware device. HSMs are designed to securely store cryptographic key material and use the key material without exposing it outside the cryptographic boundary of the hardware.

You should use AWS CloudHSM when you need to manage the HSMs that generate and store your encryption keys. In AWS CloudHSM, you create and manage HSMs, including creating users and setting their permissions. You also create the symmetric keys and asymmetric key pairs that the HSM stores.

295
Q

You work for an Australian company that is undergoing an audit and requires compliance reports for its AWS-hosted applications. Specifically, you need to obtain an Australian Hosting Certification Framework - Strategic Certification certificate promptly. What steps should you take to accomplish this quickly?

1) Use Amazon Detective to generate the report
2) Use AWS Certificate Manager to generate the certificate
3) Use AWS Trusted Advisor
4) Use AWS Artifact to download the certificate

A

4) Use AWS Artifact to download the certificate

AWS Artifact is a single source you can visit to get the compliance-related information that matters to you, such as AWS security and compliance reports or select online agreements.

296
Q

A financial tech company has decided to begin migrating their applications to the AWS cloud. Currently, they host their entire application using several self-managed Kubernetes clusters. One of their major concerns during this migration is monitoring and collecting system metrics due to the very large-scale deployments that are in place. Your Chief Technology Officer has requested the use of open-source technologies for this implementation but has also stipulated that, with the current workload of the team, the ability to manage the monitoring environment needs to be low-maintenance. Which combination of the following AWS services would best fit the company requirements while minimizing operational overhead? (Choose 2)

1) Grafana on Auto Scaling EC2 Instances
2) AWS Managed Service for Prometheus
3) AWS Managed Grafana
4) AWS Config
5) Prometheus on Auto Scaling EC2 Instances

A

2) AWS Managed Service for Prometheus
3) AWS Managed Grafana

Prometheus offers open-source monitoring. Amazon Managed Service for Prometheus is a serverless, Prometheus-compatible monitoring service for container metrics. It is perfect for monitoring Kubernetes clusters at scale.
Grafana is a well-known open-source analytics and monitoring application. Amazon Managed Grafana offers a fully managed service for infrastructure for data visualizations. You can leverage this service to query, correlate, and visualize operational metrics from multiple sources.

While Prometheus and Grafana are popular open-source monitoring tools, running them on EC2 instances immediately adds operational overhead and cost to the solution.

297
Q

A new startup is considering the advantages of using Amazon DynamoDB versus a traditional relational database in AWS RDS. The NoSQL nature of DynamoDB presents a small learning curve to the team members who all have experience with traditional databases. The company will have multiple databases, and the decision will be made on a case-by-case basis. Which of the following use cases would favor Amazon DynamoDB? (Choose 3)

1) Strong referential integrity between tables
2) Storing metadata for S3 objects
3) Managing web session data
4) High-performance reads and writes for online transaction workloads
5) Storing binary large object (BLOB) data
6) Online analytical processing (OLAP)/data warehouse

A

2) Storing metadata for S3 objects
3) Managing web session data
4) High-performance reads and writes for online transaction workloads

In order to optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.

Amazon DynamoDB’s fast and predictable performance characteristics make it a great match for handling session data. Plus, since it’s a fully-managed NoSQL database service, you avoid all the work of maintaining and operating a separate session store.

High-performance reads and writes are easy to manage with Amazon DynamoDB, and you can expect performance that is effectively constant across widely varying loads.

298
Q

A solutions architect has been assigned the task of helping the company developers optimize the performance of their web application. End users have been complaining about slow response times. The solutions architect has determined that improvements can be realized by adding ElastiCache to the solution. What can ElastiCache do to improve performance?

1) Deliver up to 10x performance improvement from milliseconds to microseconds or even at millions of requests per second
2) Queue up requests and allow the processor time to catch-up
3) Offload some of the write traffic to the database
4) Cache frequently accessed data in-memory

A

4) Cache frequently accessed data in-memory

Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source-compatible, in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high-throughput and low-latency in-memory data stores. Amazon ElastiCache is a popular choice for real-time use cases like caching, session stores, gaming, geospatial services, real-time analytics, and queuing.
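The caching strategy ElastiCache typically enables is cache-aside (lazy loading): check the cache first, fall back to the database on a miss, then populate the cache. A minimal runnable sketch, with a plain dict standing in for the Redis/Memcached cluster:

```python
# Cache-aside (lazy loading) pattern. A plain dict stands in for
# ElastiCache (Redis/Memcached) so the sketch is self-contained.

cache = {}
db_reads = 0  # counts how often the slow database is actually hit

def slow_db_query(key):
    global db_reads
    db_reads += 1
    return f"value-for-{key}"

def get(key):
    if key in cache:            # cache hit: served from memory
        return cache[key]
    value = slow_db_query(key)  # cache miss: read from the database
    cache[key] = value          # populate the cache for next time
    return value

get("user:42")   # miss -> hits the database
get("user:42")   # hit  -> served from cache, db_reads stays at 1
```

Repeated reads of hot keys never reach the database, which is exactly the response-time improvement the question describes.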

299
Q

Your company is in the process of creating a multi-region disaster recovery solution for your database, and you have been tasked to implement it. The required RTO is 1 hour, and the RPO is 15 minutes. What steps can you take to ensure these thresholds are met?

1) Use RDS to host your database. Enable the Multi-AZ option for your database. In the event of a failure, cut over to the secondary database.
2) Use RDS to host your database. Create a cross-region read replica of your database. In the event of a failure, promote the read replica to be a standalone database. Send new reads and writes to this database.
3) Use Redshift to host your database. Enable “multi-region” failover with Redshift. In the event of a failure, do nothing, as Redshift will handle it for you.
4) Take EBS snapshots of the required EC2 instances nightly. In the event of a disaster, restore the snapshots to another region.

A

2) Use RDS to host your database. Create a cross-region read replica of your database. In the event of a failure, promote the read replica to be a standalone database. Send new reads and writes to this database.

While option 1 is a great choice for high availability within a region, it won't meet the cross-region requirement.

It’s important to note that while read replicas are often used for load balancing and read-heavy workloads, they are also a valuable component in a disaster recovery strategy, especially in scenarios requiring quick failover to a different region.

300
Q

An Application Load Balancer is fronting an Auto Scaling Group of EC2 instances, and the instances are backed by an RDS database. The Auto Scaling Group has been configured to use the Default Termination Policy. You are testing the Auto Scaling Group and have triggered a scale-in. Which instance will be terminated first?

1) The instance launched from the oldest launch configuration.
2) The Auto Scaling Group will randomly select an instance to terminate.
3) The longest running instance.
4) The instance for which the load balancer stops sending traffic.

A

1) The instance launched from the oldest launch configuration.

When the Auto Scaling Group (ASG) is configured with the Default Termination Policy and a scale-in event occurs (i.e., when the number of instances needs to be reduced), AWS follows a specific sequence to determine which instance to terminate:

Balance Across Availability Zones: The policy first ensures that instances are balanced across Availability Zones. It targets the zone with the most instances for scale-in.

Oldest Launch Configuration or Template: Within the selected Availability Zone, the policy terminates the instance launched from the oldest launch configuration (or, if launch templates are used, the oldest launch template).

Closest to the Next Billing Hour: If multiple instances remain, AWS selects the one closest to the next billing hour, to maximize the use of instances that are already paid for.

Random Selection: If there is still a tie, an instance is chosen at random.
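A toy simulation of the documented tie-breaking steps (AZ balance, oldest launch configuration, proximity to the next billing hour) can make the sequence concrete. Field names here are illustrative, not boto3's:

```python
import random
from collections import Counter

def pick_instance_to_terminate(instances):
    """Toy model of the Default Termination Policy tie-breakers."""
    # Step 1: target the AZ with the most instances
    az_counts = Counter(i["az"] for i in instances)
    busiest_az = max(az_counts, key=az_counts.get)
    candidates = [i for i in instances if i["az"] == busiest_az]

    # Step 2: prefer the oldest launch configuration
    oldest = min(c["config_created"] for c in candidates)
    candidates = [c for c in candidates if c["config_created"] == oldest]

    # Step 3: prefer the instance closest to the next billing hour
    soonest = min(c["secs_to_billing_hour"] for c in candidates)
    candidates = [c for c in candidates if c["secs_to_billing_hour"] == soonest]

    # Step 4: random among remaining ties
    return random.choice(candidates)

fleet = [
    {"id": "i-1", "az": "us-east-1a", "config_created": 100, "secs_to_billing_hour": 900},
    {"id": "i-2", "az": "us-east-1a", "config_created": 50,  "secs_to_billing_hour": 1800},
    {"id": "i-3", "az": "us-east-1b", "config_created": 10,  "secs_to_billing_hour": 60},
]
victim = pick_instance_to_terminate(fleet)  # i-2: busiest AZ, oldest config
```

Note how i-3, despite being the oldest-configured instance overall, survives because its AZ has fewer instances.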

301
Q

Your team has provisioned Auto Scaling groups in a single Region. The Auto Scaling groups, at max capacity, would total 40 EC2 On-Demand Instances between them. However, you notice that the Auto Scaling groups will only scale out to a portion of that number of instances at any one time. What could be the problem?

1) You can have only 20 instances per Region. This is a hard limit
2) There is a vCPU-based On-Demand Instance limit per Region
3) The associated load balancer can serve only 20 instances at one time
4) You can have only 20 instances per AZ

A

2) There is a vCPU-based On-Demand Instance limit per Region

The default On-Demand Instance limit is now expressed in vCPUs and applies per Region (there is no per-AZ limit), and it is not a hard limit. This limit is raised automatically by AWS based on your usage; larger limit increase requests may require approval from AWS Support. Limits are imposed to ensure fair access to resources for all customers and to prevent overutilization, but they are not fixed and are designed to accommodate growing usage.

302
Q

Your company needs to shift an application to the cloud. You are looking for a solution to collect, process, gain immediate insight, and then transfer the application data to AWS. Part of this effort also includes moving a large data warehouse into AWS. The warehouse is 50 TB, and it would take over a month to migrate the data using the currently available bandwidth. What is the best option available to perform this one-time migration, considering both cost and performance?

1) AWS Snowball Edge
2) AWS SnowMobile
3) AWS Direct Connect
4) AWS VPN

A

1) AWS Snowball Edge

Direct Connect is more suited to a long-term solution rather than a one-time operation.

The AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities. Snowball Edge can undertake local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud.

302
Q

As part of your Disaster Recovery plan, you would like to have only the critical infrastructure up and running in AWS. You don’t mind a longer Recovery Time Objective (RTO). Which DR strategy do you recommend?

1) Backup and Restore
2) Pilot Light
3) Warm Standby
4) Multi-Site

A

2) Pilot Light

303
Q

You would like to get the Disaster Recovery strategy with the lowest Recovery Time Objective (RTO) and Recovery Point Objective (RPO), regardless of the cost. Which DR should you choose?

1) Backup and Restore
2) Pilot Light
3) Warm Standby
4) Multi-Site

A

4) Multi-Site

304
Q

Which of the following Disaster Recovery strategies has a potentially high Recovery Point Objective (RPO) and Recovery Time Objective (RTO)?

1) Backup and Restore
2) Pilot Light
3) Warm Standby
4) Multi-Site

A

1) Backup and Restore

305
Q

You want to make a Disaster Recovery plan where you have a scaled-down version of your system up and running, and when a disaster happens, it scales up quickly. Which DR strategy should you choose?

1) Backup and Restore
2) Pilot Light
3) Warm Standby
4) Multi-Site

A

3) Warm Standby

306
Q

You have an on-premises Oracle database that you want to migrate to AWS, specifically to Amazon Aurora. How would you do the migration?

1) Use AWS Schema Conversion Tool (AWS SCT) to convert the database schema, then use AWS Database Migration Service (AWS DMS) to migrate the data
2) Use AWS Database Migration Service (AWS DMS) to convert the database schema, then use AWS Schema Conversion Tool (AWS SCT) to migrate the data

A

1) Use AWS Schema Conversion Tool (AWS SCT) to convert the database schema, then use AWS Database Migration Service (AWS DMS) to migrate the data

307
Q

AWS DataSync supports the following locations, EXCEPT ………………..

1) Amazon S3
2) Amazon EBS
3) Amazon EFS
4) Amazon FSx for Windows File Server

A

2) Amazon EBS

308
Q

You are running many resources in AWS, such as EC2 instances, EBS volumes, and DynamoDB tables. You want an easy way to manage backups across all these AWS services from a single place. Which AWS offering makes this process easy?

1) Amazon S3
2) AWS Storage Gateway
3) AWS Backup
4) EC2 Snapshots

A

3) AWS Backup

AWS Backup enables you to centralize and automate data protection across AWS services. It helps you support your regulatory compliance or business policies for data protection.

309
Q

A company is planning to migrate its existing websites, applications, servers, virtual machines, and data to AWS. They want to do a lift-and-shift migration with minimum downtime and reduced costs. Which AWS service can help in this scenario?

1) AWS Database Migration Service
2) AWS Application Migration Service
3) AWS Backup
4) AWS Schema Conversion Tool

A

2) AWS Application Migration Service

310
Q

A company is using VMware in its on-premises data center to manage its infrastructure. There is a requirement to extend their data center and infrastructure to AWS while keeping the VMware technology stack they already use. Which AWS service can they use?

1) VMware Cloud on AWS
2) AWS DataSync
3) AWS Application Migration Service
4) AWS Application Discovery Service

A

1) VMware Cloud on AWS

311
Q

A company is using RDS for MySQL as their main database, but lately they have been facing issues with database management, performance, and scalability. They have decided to use Aurora for MySQL instead, for better performance, less complexity, and fewer administrative tasks. What is the best and most cost-effective way to migrate from RDS for MySQL to Aurora for MySQL?

1) Raise an AWS support ticket to do the migration as it is not supported
2) Create a database dump from RDS from MySQL, store it in an S3 bucket, then restore it to Aurora for MySQL
3) You can not migrate directly to Aurora for MySQL, you have to create a custom application to insert the data manually
4) Create a snapshot from RDS for MySQL and restore it to Aurora for MySQL

A

4) Create a snapshot from RDS for MySQL and restore it to Aurora for MySQL

312
Q

Which AWS service can you use to automate the backup across different AWS services such as RDS, DynamoDB, Aurora, and EFS file systems, and EBS volumes?

1) Amazon S3 Lifecycle Policy
2) AWS DataSync
3) AWS Backup
4) Amazon Glacier

A

3) AWS Backup

313
Q

You need to move hundreds of Terabytes into Amazon S3, then process the data using a fleet of EC2 instances. You have a 1 Gbit/s broadband connection. You would like to move the data faster and possibly process it while in transit. What do you recommend?

1) Use your network
2) Use Snowcone
3) Use AWS Data Migration
4) Use Snowball Edge

A

4) Use Snowball Edge

Snowball Edge is the right answer as it comes with computing capabilities and allows you to pre-process the data while it’s being moved into Snowball.

314
Q

You want to expose virtually infinite storage for your tape backups. You want to keep the same software you’re using and want an iSCSI compatible interface. What do you use?

1) AWS Snowball
2) AWS Storage Gateway - Tape Gateway
3) AWS Storage Gateway - Volume Gateway
4) AWS Storage Gateway - S3 File Gateway

A

2) AWS Storage Gateway - Tape Gateway

315
Q

Your EC2 Windows Servers need to share some data by having a Network File System mounted on them which respects the Windows security mechanisms and has integration with Microsoft Active Directory. What do you recommend?

1) Amazon FSx for Windows (File Server)
2) Amazon EFS
3) Amazon FSx for Lustre
4) S3 File Gateway

A

1) Amazon FSx for Windows (File Server)

316
Q

You have hundreds of Terabytes that you want to migrate to AWS S3 as soon as possible. You tried to use your network bandwidth, and it will take around 3 weeks to complete the upload process. What is the recommended approach to use in this situation?

1) AWS Storage Gateway - Volume Gateway
2) S3 Multi-part Upload
3) AWS Snowball Edge
4) AWS Data Migration Service

A

3) AWS Snowball Edge

317
Q

You have a large dataset stored in S3 that you want to access from on-premises servers using the NFS or SMB protocol. Also, you want to authenticate access to these files through on-premises Microsoft AD. What would you use?

1) AWS Storage Gateway - Volume Gateway
2) AWS Storage Gateway - S3 File Gateway
3) AWS Storage Gateway - Tape Gateway
4) AWS Data Migration Service

A

2) AWS Storage Gateway - S3 File Gateway

318
Q

You are planning to migrate your company’s infrastructure from on-premises to AWS Cloud. You have an on-premises Microsoft Windows File Server that you want to migrate. What is the most suitable AWS service you can use?

1) Amazon FSx for Windows (File Server)
2) AWS Storage Gateway - S3 File Gateway
3) AWS Managed Microsoft AD

A

1) Amazon FSx for Windows (File Server)

319
Q

You would like to have a distributed POSIX compliant file system that will allow you to maximize the IOPS in order to perform some High-Performance Computing (HPC) and genomics computational research. This file system has to easily scale to millions of IOPS. What do you recommend?

1) EFS with Max I/O enabled
2) Amazon FSx for Lustre
3) Amazon S3 mounted on the EC2 instances
4) EC2 Instance Store

A

2) Amazon FSx for Lustre

320
Q

Which deployment option in the FSx for Lustre file system provides you with long-term storage that's replicated within the same AZ?

1) Scratch File System
2) Persistent File System

A

2) Persistent File System

321
Q

Which of the following protocols is NOT supported by AWS Transfer Family?

1) File Transfer Protocol (FTP)
2) File Transfer Protocol over SSL (FTPS)
3) Transport Layer Security (TLS)
4) Secure File Transfer Protocol (SFTP)

A

3) Transport Layer Security (TLS)

AWS Transfer Family is a managed service for file transfers into and out of S3 or EFS using the SFTP, FTPS, and FTP protocols. TLS is a transport encryption protocol, not a file transfer protocol, thus it is not supported.

322
Q

A company uses a lot of files and data which is stored in an FSx for Windows File Server storage on AWS. Those files are currently used by the resources hosted on AWS. There’s a requirement for those files to be accessed on-premises with low latency. Which AWS service can help you achieve this?

1) S3 File Gateway
2) FSx for Windows File Server On-Premises
3) FSx File Gateway
4) Volume Gateway

A

3) FSx File Gateway

323
Q

A Solutions Architect is working on planning the migration of a startup company from on-premises to AWS. Currently, their infrastructure consists of many servers and 30 TB of data hosted on shared NFS storage. He has decided to use Amazon S3 to host the data. Which AWS service can efficiently migrate the data from on-premises to S3?

1) AWS Storage Tape Gateway
2) Amazon EBS
3) AWS Transfer Family
4) AWS DataSync

A

4) AWS DataSync

324
Q

Which AWS service is best suited to migrate a large amount of data from an S3 bucket to an EFS file system?

1) AWS Snowball
2) AWS DataSync
3) AWS Transfer Family
4) AWS Backup

A

2) AWS DataSync

325
Q

A Machine Learning company is working on a set of datasets hosted on S3 buckets. The company decided to release those datasets to the public to support others' research, but they don't want to configure the S3 bucket to be public, and the datasets should be exposed over the FTP protocol. What can they do to meet the requirement efficiently and with the least effort?

1) Use AWS Transfer Family
2) Create an EC2 Instance with an FTP server installed then copy the data from S3 to the EC2 instance
3) Use AWS Storage Gateway
4) Copy the data from S3 to an EFS file system, then expose them over the FTP protocol

A

1) Use AWS Transfer Family

326
Q

Amazon FSx for NetApp ONTAP is compatible with the following protocols, EXCEPT ………………

1) NFS
2) SMB
3) FTP
4) iSCSI

A

3) FTP

327
Q

Which AWS service is best suited when migrating from an on-premises ZFS file system to AWS?

1) Amazon FSx for OpenZFS
2) Amazon FSx for NetApp ONTAP
3) Amazon FSx for Windows File Server
4) Amazon FSx for Lustre

A

1) Amazon FSx for OpenZFS

328
Q

A company is running Amazon S3 File Gateway to host their data on S3 buckets and is able to mount them on-premises using SMB. The data is currently hosted in the S3 Standard storage class, and there is a requirement to reduce S3 costs. So, they have decided to migrate some of that data to S3 Glacier. What is the most efficient way to move the data to S3 Glacier automatically?

1) Create a Lambda function to migrate data to S3 Glacier and periodically trigger it every day using Amazon EventBridge
2) Use S3 Batch Operations to loop through S3 files and move them to S3 Glacier every day
3) Use S3 Lifecycle Policy
4) Use AWS DataSync to replicate data to S3 Glacier every day
5) Configure S3 File Gateway to send the data to S3 Glacier directly

A

3) Use S3 Lifecycle Policy
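The lifecycle rule itself is just a JSON document attached to the bucket. Below is a minimal sketch of the shape that boto3's `put_bucket_lifecycle_configuration` accepts; the prefix and day count are illustrative values, not from the question.

```python
# Minimal S3 Lifecycle configuration that transitions objects to
# S3 Glacier after 90 days. Prefix and day count are illustrative.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": "archive/"},   # only objects under this prefix
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}
```

Once attached to the bucket, S3 applies the transition automatically; no Lambda, Batch Operations, or DataSync job is needed.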

329
Q

You have on-premises sensitive files and documents that you want to regularly synchronize to AWS to keep another copy. Which AWS service can help you with that?

1) AWS Database Migration Service
2) Amazon EFS
3) AWS DataSync

A

3) AWS DataSync

AWS DataSync is an online data transfer service that simplifies, automates, and accelerates moving data between on-premises storage systems and AWS Storage services, as well as between AWS Storage services.

330
Q

Which service is meant to help you plan migrations of your applications to AWS through the collection of usage and configuration data from on-premises servers via either agentless or agent-based collectors?

1) AWS Simple Notification Service
2) AWS Database Migration Service
3) AWS Application Discovery Service
4) AWS Migration Tracking Service

A

3) AWS Application Discovery Service

This service helps users plan migrations to AWS via the collection of usage and configuration data from on-premises servers.

331
Q

What does this CIDR 10.0.4.0/28 correspond to?

1) 10.0.4.0 to 10.0.4.15
2) 10.0.4.0 to 10.0.32.0
3) 10.0.4.0 to 10.0.4.28
4) 10.0.0.0 to 10.0.16.0

A

1) 10.0.4.0 to 10.0.4.15

/28 means 16 IP addresses (2^(32-28) = 2^4 = 16), so only the last four bits can change: the range is 10.0.4.0 to 10.0.4.15.
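The arithmetic can be verified with Python's stdlib `ipaddress` module:

```python
import ipaddress

# A /28 leaves 32 - 28 = 4 host bits, i.e. 2**4 = 16 addresses.
net = ipaddress.ip_network("10.0.4.0/28")

first = net[0]           # network address: 10.0.4.0
last = net[-1]           # broadcast address: 10.0.4.15
size = net.num_addresses # 16
```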

332
Q

You have a corporate network of size 10.0.0.0/8 and a satellite office of size 192.168.0.0/16. Which CIDR is acceptable for your AWS VPC if you plan on connecting your networks later on?

1) 172.16.0.0/12
2) 172.16.0.0/16
3) 10.0.16.0/16
4) 192.168.4.0/18

A

2) 172.16.0.0/16

CIDRs should not overlap, and the largest CIDR block allowed for a VPC in AWS is /16.
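The two constraints — no overlap with either existing network, and AWS's /16 maximum VPC size — can be checked programmatically. A sketch using the stdlib `ipaddress` module:

```python
import ipaddress

corporate = ipaddress.ip_network("10.0.0.0/8")
satellite = ipaddress.ip_network("192.168.0.0/16")

def acceptable(cidr, max_prefix=16):
    """A VPC CIDR is acceptable if its prefix is no larger than AWS's
    /16 VPC maximum and it overlaps neither existing network."""
    # strict=False tolerates CIDRs written with host bits set,
    # as some of the answer options are
    net = ipaddress.ip_network(cidr, strict=False)
    if net.prefixlen < max_prefix:   # bigger than /16 is not allowed
        return False
    return not (net.overlaps(corporate) or net.overlaps(satellite))

results = {c: acceptable(c) for c in
           ["172.16.0.0/12", "172.16.0.0/16", "10.0.16.0/16", "192.168.4.0/18"]}
```

Only 172.16.0.0/16 passes: /12 is too large, and the other two options fall inside the corporate or satellite ranges.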

333
Q

You plan on creating a subnet and want it to have at least capacity for 28 EC2 instances. What’s the minimum size you need to have for your subnet?

1) /28
2) /27
3) /26
4) /25

A

3) /26
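The sizing rule is that AWS reserves 5 addresses per subnet, so a /p subnet offers 2^(32-p) − 5 usable IPs. The smallest subnet that fits 28 instances can be found like this:

```python
def usable_ips(prefix):
    """Usable addresses in an AWS subnet: total minus the 5 reserved."""
    return 2 ** (32 - prefix) - 5

def minimal_subnet(instances):
    """Smallest subnet (largest prefix length) that fits the fleet.
    AWS subnets range from /28 (smallest) down to /16 (largest)."""
    for prefix in range(28, 15, -1):
        if usable_ips(prefix) >= instances:
            return prefix
    raise ValueError("requirement exceeds a /16 subnet")

# /28 -> 11 usable, /27 -> 27 usable (one short!), /26 -> 59 usable
prefix = minimal_subnet(28)  # 26
```

A /27 looks big enough at first glance (32 addresses), but the 5 reserved addresses leave only 27 usable, hence /26.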

334
Q

Security Groups operate at the …………….. level while NACLs operate at the …………….. level.

1) EC2 instance, Subnet
2) Subnet, EC2 instance

A

1) EC2 instance, Subnet

335
Q

You have attached an Internet Gateway to your VPC, but your EC2 instances still don’t have access to the internet. What is NOT a possible issue?

1) Route Tables are missing entries
2) The EC2 instances don’t have public IPs
3) The Security Group does not allow traffic in
4) The NACL does not allow network traffic out

A

3) The Security Group does not allow traffic in

Security groups are stateful and if traffic can go out, then it can go back in.

336
Q

You would like to provide Internet access to your EC2 instances in private subnets with IPv4 while making sure this solution requires the least amount of administration and scales seamlessly. What should you use?

1) NAT Instances with Source/Destination Check flag off
2) Egress Only Internet Gateway
3) NAT Gateway

A

3) NAT Gateway

337
Q

VPC Peering has been enabled between VPC A and VPC B, and the route tables have been updated for VPC A. But, the EC2 instances cannot communicate. What is the likely issue?

1) Check the NACL
2) Check the Route Tables in VPC B
3) Check the EC2 instance attached Security Groups
4) Check if DNS Resolution is enabled

A

2) Check the Route Tables in VPC B

Route tables must be updated in both VPCs that are peered.

338
Q

You have set up a Direct Connect connection between your corporate data center and your VPC A in your AWS account. You need to access VPC B in another AWS region from your corporate datacenter as well. What should you do?

1) Enable VPC Peering
2) Use a Customer Gateway
3) Use a Direct Connect Gateway
4) Set up a NAT Gateway

A

3) Use a Direct Connect Gateway

This is the main use case of Direct Connect Gateways.

339
Q

When using VPC Endpoints, what are the only two AWS services that have a Gateway Endpoint available?

1) Amazon S3 & Amazon SQS
2) Amazon SQS & DynamoDB
3) Amazon S3 & DynamoDB

A

3) Amazon S3 & DynamoDB

These two services have a VPC Gateway Endpoint (remember it), all the other ones have an Interface endpoint (powered by Private Link - means a private IP).

340
Q

AWS reserves 5 IP addresses each time you create a new subnet in a VPC. When you create a subnet with CIDR 10.0.0.0/24, the following IP addresses are reserved, EXCEPT ………………..

1) 10.0.0.1
2) 10.0.0.2
3) 10.0.0.3
4) 10.0.0.4

A

4) 10.0.0.4
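The five reserved addresses for this subnet can be listed with the stdlib `ipaddress` module: the network address (.0), the next three addresses (.1 for the VPC router, .2 for DNS, .3 for future use), and the broadcast address (.255).

```python
import ipaddress

net = ipaddress.ip_network("10.0.0.0/24")

# AWS reserves the first four addresses and the last (broadcast) one.
reserved = [str(net[0]),   # 10.0.0.0   network address
            str(net[1]),   # 10.0.0.1   VPC router
            str(net[2]),   # 10.0.0.2   DNS
            str(net[3]),   # 10.0.0.3   reserved for future use
            str(net[-1])]  # 10.0.0.255 broadcast (not supported, still reserved)
```

10.0.0.4 is the first address not on that list, which is why it is the correct answer.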

341
Q

You have 3 VPCs A, B, and C. You want to establish a VPC Peering connection between all the 3 VPCs. What should you do?

1) As VPC Peering supports Transitive Peers, so you need to establish 2 VPC Peering connections (A-B, B-C)
2) Establish 3 VPC Peering connections (A-B, A-C, B-C)

A

2) Establish 3 VPC Peering connections (A-B, A-C, B-C)
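Because VPC Peering is not transitive, a full mesh of n VPCs needs n(n−1)/2 peering connections — one per pair:

```python
from itertools import combinations

# Each pair of VPCs needs its own peering connection:
# n * (n - 1) / 2 connections for a full mesh.
vpcs = ["A", "B", "C"]
peerings = list(combinations(vpcs, 2))  # [("A","B"), ("A","C"), ("B","C")]
```

With 3 VPCs that is 3 connections; with 10 VPCs it would already be 45, which is why Transit Gateway is preferred at scale.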

342
Q

How can you capture information about IP traffic inside your VPCs?

1) Enable VPC Flow Logs
2) Enable VPC Traffic Mirroring
3) Enable CloudWatch Traffic Logs

A

1) Enable VPC Flow Logs

VPC Flow Logs is a VPC feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC.

343
Q

If you want a 500 Mbps Direct Connect connection between your corporate datacenter to AWS, you would choose a ……………… connection.

1) Dedicated
2) Hosted

A

2) Hosted

A hosted Direct Connect connection supports speeds of 50 Mbps, 500 Mbps, and up to 10 Gbps.

344
Q

When you set up an AWS Site-to-Site VPN connection between your corporate on-premises datacenter and VPCs in AWS Cloud, what are the two major components you want to configure for this connection?

1) Customer Gateway and NAT Gateway
2) Internet Gateway and Customer Gateway
3) Virtual Private Gateway and Internet Gateway
4) Virtual Private Gateway and Customer Gateway

A

4) Virtual Private Gateway and Customer Gateway

345
Q

Your company has several on-premises sites across the USA. These sites are currently linked using private connections, but your private connection provider has recently been quite unstable, leaving your IT architecture partially offline. You would like to create a backup connection over the public Internet linking your on-premises sites, which you can fail over to in case of issues with your provider. What do you recommend?

1) VPC Peering
2) AWS VPN CloudHub
3) Direct Connect
4) AWS PrivateLink

A

2) AWS VPN CloudHub

AWS VPN CloudHub allows you to securely communicate with multiple sites using AWS VPN. It operates on a simple hub-and-spoke model that you can use with or without a VPC.

346
Q

You need to set up a dedicated connection between your on-premises corporate datacenter and AWS Cloud. This connection must be private, consistent, and traffic must not travel through the Internet. Which AWS service should you use?

1) Site-to-Site VPN
2) AWS PrivateLink
3) AWS Direct Connect
4) Amazon EventBridge

A

3) AWS Direct Connect

347
Q

Using a Direct Connect connection, you can access both public and private AWS resources.

True
False

A

True

348
Q

You want to scale up an AWS Site-to-Site VPN connection throughput, established between your on-premises data and AWS Cloud, beyond a single IPsec tunnel’s maximum limit of 1.25 Gbps. What should you do?

1) Use 2 Virtual Private Gateways
2) Use Direct Connect Gateway
3) Use Transit Gateway

A

3) Use Transit Gateway

AWS Transit Gateway supports ECMP (Equal-Cost Multi-Path) routing, which lets you aggregate multiple Site-to-Site VPN tunnels and scale throughput beyond the 1.25 Gbps limit of a single tunnel.

349
Q

You have a VPC in your AWS account that runs in dual-stack mode. You are trying to launch an EC2 instance, but it fails. After further investigation, you have found that you no longer have IPv4 addresses available. What should you do?

1) Modify your VPC to run in IPv6 mode only
2) Modify your VPC to run in IPv4 mode only
3) Add an additional IPv4 CIDR to your VPC

A

3) Add an additional IPv4 CIDR to your VPC

350
Q

A web application backend is hosted on EC2 instances in private subnets fronted by an Application Load Balancer in public subnets. There is a requirement to give some of the developers access to the backend EC2 instances without exposing them to the Internet. You have created a bastion host EC2 instance in a public subnet and configured the backend EC2 instances' Security Group to allow traffic from the bastion host. Which of the following is the best configuration for the bastion host's Security Group to keep it secure?

1) Allow traffic only on port 80 from the company’s public CIDR
2) Allow traffic only on port 22 from the company’s public CIDR
3) Allow traffic only on port 22 from the company’s private CIDR
4) Allow traffic only on port 80 from the company’s private CIDR

A

2) Allow traffic only on port 22 from the company’s public CIDR

351
Q

A company has set up a Direct Connect connection between their corporate data center and AWS. There is a requirement to prepare a cost-effective, secure backup connection in case there are issues with this Direct Connect connection. What is the most cost-effective and secure solution you recommend?

1) Set up another Direct Connect connection to the same AWS region
2) Set up another Direct Connect connection to a different AWS region
3) Set up a Site-to-Site VPN connection as a backup

A

3) Set up a Site-to-Site VPN connection as a backup

352
Q

Which AWS service allows you to protect and control traffic in your VPC from layer 3 to layer 7?

1) AWS Network Firewall
2) Amazon GuardDuty
3) Amazon Inspector
4) AWS Shield

A

1) AWS Network Firewall

353
Q

A web application is hosted on a fleet of EC2 instances managed by an Auto Scaling Group. You expose this application through an Application Load Balancer. Both the EC2 instances and the ALB are deployed in a VPC with the CIDR 192.168.0.0/18. How do you configure the EC2 instances' Security Group to ensure only the ALB can access them on port 80?

1) Add an Inbound Rule with port 80 and 0.0.0.0/0 as the source
2) Add an Inbound Rule with port 80 and 192.168.0.0/18 as the source
3) Add an Inbound Rule with port 80 and ALB’s Security Group as the source
4) Load SSL certificate on the ALB

A

3) Add an Inbound Rule with port 80 and ALB’s Security Group as the source

This is the most secure way of ensuring only the ALB can access the EC2 instances. Referencing security groups in rules is an extremely powerful technique, and many exam questions rely on it. Make sure you fully master the concept behind it!
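In boto3 terms, such a rule uses `UserIdGroupPairs` instead of `IpRanges`. A sketch of the permission structure that `authorize_security_group_ingress` accepts — the security group ID here is made up:

```python
# The ingress rule references the ALB's security group instead of a
# CIDR block. Only resources that are members of that security group
# (i.e., the ALB's nodes) can reach the instances on port 80.
ALB_SG_ID = "sg-0123alb"  # hypothetical ALB security group ID

ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 80,
    "ToPort": 80,
    # Source is a security group, not an IP range:
    "UserIdGroupPairs": [{"GroupId": ALB_SG_ID}],
}
```

Had the rule used `"IpRanges": [{"CidrIp": "192.168.0.0/18"}]` instead, any host in the VPC could have bypassed the ALB.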

354
Q

You have just terminated an EC2 instance in us-east-1a, and its attached EBS volume is now available. Your teammate tries to attach it to an EC2 instance in us-east-1b but he can’t. What is a possible cause for this?

1) He’s missing IAM permissions
2) EBS volumes are locked to AWS Region
3) EBS volumes are locked to AZ

A

3) EBS volumes are locked to AZ

EBS Volumes are created for a specific AZ. It is possible to migrate them between different AZs using EBS Snapshots.

355
Q

You can use an AMI in N.Virginia Region us-east-1 to launch an EC2 instance in any AWS Region.

True
False

A

False

AMIs are built for a specific AWS Region, they’re unique for each AWS Region. You can’t launch an EC2 instance using an AMI in another AWS Region, but you can copy the AMI to the target AWS Region and then use it to create your EC2 instances.

355
Q

You have launched an EC2 instance with two EBS volumes: the root volume and an additional EBS volume to store data. A month later, you are planning to terminate the EC2 instance. What's the default behavior for each EBS volume?

1) Both the root volume type and the EBS volume type will be deleted
2) The root volume type will be deleted and the EBS volume type will not be deleted
3) The root volume type will not be deleted and the EBS volume type will be deleted
4) Both the root volume type and the EBS volume type will not be deleted

A

2) The root volume type will be deleted and the EBS volume type will not be deleted

By default, the root volume will be deleted, as its “Delete On Termination” attribute is checked by default. Any other EBS volume will not be deleted, as its “Delete On Termination” attribute is disabled by default.
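At launch time, this behavior maps to the `DeleteOnTermination` flag in the block device mappings. A sketch of the structure boto3's `run_instances` accepts — device names and sizes are illustrative:

```python
# BlockDeviceMappings with the default DeleteOnTermination behavior
# made explicit: the root volume is deleted with the instance, the
# extra data volume survives.
block_device_mappings = [
    {   # root volume: deleted on termination by default
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 8, "DeleteOnTermination": True},
    },
    {   # extra data volume: kept on termination by default
        "DeviceName": "/dev/xvdf",
        "Ebs": {"VolumeSize": 100, "DeleteOnTermination": False},
    },
]

survivors = [m["DeviceName"] for m in block_device_mappings
             if not m["Ebs"]["DeleteOnTermination"]]
```

Either flag can be overridden at launch (or later via `modify_instance_attribute`) if you want different behavior.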

356
Q

Which of the following EBS volume types can be used as boot volumes when you create EC2 instances?

1) gp2, gp3, st1, sc1
2) gp2, gp3, io1, io2
3) io1, io2, st1, sc1

A

2) gp2, gp3, io1, io2

When creating EC2 instances, you can only use the following EBS volume types as boot volumes: gp2, gp3, io1, io2, and Magnetic (Standard).

357
Q

What is EBS Multi-Attach?

1) Attach the same EBS volume to multiple EC2 instances in multiple AZs
2) Attach multiple EBS volumes in the same AZ to the same EC2 instance
3) Attach the same EBS volume to multiple EC2 instances in the same AZ
4) Attach multiple EBS volumes in multiple AZs to the same EC2 instance

A

3) Attach the same EBS volume to multiple EC2 instances in the same AZ

Using EBS Multi-Attach, you can attach the same EBS volume to multiple EC2 instances in the same AZ. Each EC2 instance has full read/write permissions.

358
Q

You would like to encrypt an unencrypted EBS volume attached to your EC2 instance. What should you do?

1) Create an EBS snapshot of your EBS volume. Copy the snapshot and tick the option to encrypt the copied snapshot. Then, use the encrypted snapshot to create a new EBS volume
2) Select your EBS volume, choose Edit Attributes, then tick the Encrypt using KMS option
3) Create a new encrypted EBS volume, then copy data from your unencrypted EBS volume to the new EBS volume
4) Submit a request to AWS Support to encrypt your EBS volume

A

1) Create an EBS snapshot of your EBS volume. Copy the snapshot and tick the option to encrypt the copied snapshot. Then, use the encrypted snapshot to create a new EBS volume

359
Q

You have a fleet of EC2 instances distributed across AZs that process a large data set. What do you recommend to make the same data accessible as an NFS drive to all of your EC2 instances?

1) Use EBS
2) Use EFS
3) Use an Instance Store

A

2) Use EFS

EFS is a network file system (NFS) that allows you to mount the same file system on EC2 instances that are in different AZs.

360
Q

You would like to have a high-performance local cache for your application hosted on an EC2 instance. You don’t mind losing the cache upon the termination of your EC2 instance. Which storage mechanism do you recommend as a Solutions Architect?

1) EBS
2) EFS
3) Instance Store

A

3) Instance Store

EC2 Instance Store provides the best disk I/O performance.

361
Q

You are running a high-performance database that requires an IOPS of 310,000 for its underlying storage. What do you recommend?

1) Use an EBS gp2 drive
2) Use an EBS io1 drive
3) Use an EC2 Instance Store
4) Use an EBS io2 Block Express drive

A

3) Use an EC2 Instance Store

You can run a database on an EC2 instance that uses an Instance Store, but you'll have the problem that the data will be lost if the EC2 instance is stopped (it can be restarted without problems). One solution is to set up a replication mechanism on another EC2 instance with an Instance Store to keep a standby copy. Another solution is to set up a backup mechanism for your data. It's up to you how to design your architecture to meet your requirements. In this use case, the requirement is IOPS: even an EBS io2 Block Express volume tops out at 256,000 IOPS, so to reach 310,000 IOPS we have to choose an EC2 Instance Store.

362
Q

Scaling an EC2 instance from r4.large to r4.4xlarge is called …………………

1) Horizontal Scalability
2) Vertical Scalability

A

2) Vertical Scalability

363
Q

Running an application on an Auto Scaling Group that scales the number of EC2 instances in and out is called …………………

1) Horizontal Scalability
2) Vertical Scalability

A

1) Horizontal Scalability

364
Q

Elastic Load Balancers provide a …………………..

1) static IPv4 we can use in our application
2) static DNS name we can use in our application
3) static IPv6 we can use in our application

A

2) static DNS name we can use in our application

Only Network Load Balancer provides both a static DNS name and static IP addresses, while Application Load Balancer provides a static DNS name but does NOT provide a static IP. The reason is that AWS wants your Elastic Load Balancer to be accessible through a static endpoint, even as the underlying infrastructure that AWS manages changes.

365
Q

You are running a website on 10 EC2 instances fronted by an Elastic Load Balancer. Your users are complaining about the fact that the website always asks them to re-authenticate when they are moving between website pages. You are puzzled because it’s working just fine on your machine and in the Dev environment with 1 EC2 instance. What could be the reason?

1) Your website must have an issue when hosted on multiple EC2 instances
2) The EC2 instances log out users as they can’t see their IP addresses, instead they receive ELB IP addresses
3) The Elastic Load Balancer does not have Sticky Sessions enabled

A

3) The Elastic Load Balancer does not have Sticky Sessions enabled

The ELB Sticky Sessions feature ensures traffic for the same client is always redirected to the same target (e.g., an EC2 instance), so the client does not lose their session data.

366
Q

You are using an Application Load Balancer to distribute traffic to your website hosted on EC2 instances. It turns out that your website only sees traffic coming from private IPv4 addresses which are in fact your Application Load Balancer’s IP addresses. What should you do to get the IP address of clients connected to your website?

1) Modify your website’s frontend so that users send their IP in every request
2) Modify your website’s backend to get the client IP address from the X-Forwarded-For header
3) Modify your website’s backend to get the client IP address from the X-Forwarded-Port header
4) Modify your website’s backend to get the client IP address from the X-Forwarded-Proto header

A

2) Modify your website’s backend to get the client IP address from the X-Forwarded-For header

When using an Application Load Balancer to distribute traffic to your EC2 instances, the IP address you’ll receive requests from will be the ALB’s private IP addresses. To get the client’s IP address, the ALB adds an additional header called “X-Forwarded-For” that contains the client’s IP address.
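
As a minimal sketch of what the backend change looks like (framework-agnostic; the header-lookup API varies by framework, and the addresses shown are made up): the header may carry a comma-separated chain of addresses, and the leftmost entry is the original client.

```python
def client_ip(headers):
    """Return the original client IP from the X-Forwarded-For header.

    The header may carry a comma-separated chain (client, proxy1, ...);
    the leftmost entry is the original client.
    """
    xff = headers.get("X-Forwarded-For")
    if not xff:
        return None
    return xff.split(",")[0].strip()

print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.1.12"}))  # prints 203.0.113.7
```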

367
Q

You hosted an application on a set of EC2 instances fronted by an Elastic Load Balancer. A week later, users begin complaining that sometimes the application just doesn’t work. You investigate the issue and found that some EC2 instances crash from time to time. What should you do to protect users from connecting to the EC2 instances that are crashing?

1) Enable ELB Health Checks
2) Enable ELB Stickiness
3) Enable SSL Termination
4) Enable Cross-Zone Load Balancing

A

1) Enable ELB Health Checks

When you enable ELB Health Checks, your ELB won’t send traffic to unhealthy (crashed) EC2 instances.

368
Q

You are working as a Solutions Architect for a company and you are required to design an architecture for a high-performance, low-latency application that will receive millions of requests per second. Which type of Elastic Load Balancer should you choose?

1) Application Load Balancer
2) Classic Load Balancer
3) Network Load Balancer

A

3) Network Load Balancer

Network Load Balancer provides the highest performance and lowest latency if your application needs it.

369
Q

Application Load Balancers support the following protocols, EXCEPT:

1) HTTP
2) HTTPS
3) TCP
4) WebSocket

A

3) TCP

Application Load Balancers support HTTP, HTTPS, and WebSocket.

370
Q

Application Load Balancers can route traffic to different Target Groups based on the following, EXCEPT:

1) Client’s Location (Geography)
2) Hostname
3) Request URL Path
4) Source IP Address

A

1) Client’s Location (Geography)

ALBs can route traffic to different Target Groups based on URL Path, Hostname, HTTP Headers, and Query Strings.
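
Conceptually, ALB listener rules are evaluated in priority order and the first match wins, falling back to a default rule. A toy sketch (the rule conditions and Target Group names here are invented for illustration):

```python
def pick_target_group(host, path):
    """Illustrative ALB-style rule evaluation: rules are checked in
    order, the first matching condition wins, and unmatched requests
    fall through to a default Target Group."""
    rules = [
        (lambda h, p: h == "api.example.com", "tg-api"),          # hostname rule
        (lambda h, p: p.startswith("/checkout"), "tg-checkout"),  # URL path rule
    ]
    for condition, target_group in rules:
        if condition(host, path):
            return target_group
    return "tg-default"

print(pick_target_group("www.example.com", "/checkout/cart"))  # prints tg-checkout
```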

371
Q

Registered targets in a Target Group for an Application Load Balancer can be one of the following, EXCEPT:

1) EC2 Instances
2) Network Load Balancer
3) Private IP Addresses
4) Lambda Functions

A

2) Network Load Balancer

372
Q

For compliance purposes, you would like to expose a fixed static IP address to your end-users so that they can write firewall rules that will be stable and approved by regulators. What type of Elastic Load Balancer would you choose?

1) Application Load Balancer with an Elastic IP attached to it
2) Network Load Balancer
3) Classic Load Balancer

A

2) Network Load Balancer

Network Load Balancer has one static IP address per AZ and you can attach an Elastic IP address to it. Application Load Balancers and Classic Load Balancers have only a static DNS name.

373
Q

You want to create a custom application-based cookie in your Application Load Balancer. Which of the following can you use as a cookie name?

1) AWSALBAPP
2) APPUSERC
3) AWSALBTG
4) AWSALB

A

2) APPUSERC

The following cookie names are reserved by the ELB (AWSALB, AWSALBAPP, AWSALBTG).

374
Q

You have a Network Load Balancer that distributes traffic across a set of EC2 instances in us-east-1. You have 2 EC2 instances in us-east-1b AZ and 5 EC2 instances in us-east-1e AZ. You have noticed that the CPU utilization is higher in the EC2 instances in us-east-1b AZ. After more investigation, you noticed that the traffic is equally distributed across the two AZs. How would you solve this problem?

1) Enable Cross-Zone Load Balancing
2) Enable Sticky Sessions
3) Enable ELB Health Checks
4) Enable SSL Termination

A

1) Enable Cross-Zone Load Balancing

376
Q

Which feature in both Application Load Balancers and Network Load Balancers allows you to load multiple SSL certificates on one listener?

1) TLS Termination
2) Server Name Indication (SNI)
3) SSL Security Policies
4) Host Headers

A

2) Server Name Indication (SNI)

377
Q

You have an Application Load Balancer that is configured to redirect traffic to 3 Target Groups based on the following hostnames: users.example.com, api.external.example.com, and checkout.example.com. You would like to configure HTTPS for each of these hostnames. How do you configure the ALB to make this work?

1) Use an HTTP to HTTPS redirect rule
2) Use a security group SSL certificate
3) Use Server Name Indication (SNI)

A

3) Use Server Name Indication (SNI)

Server Name Indication (SNI) allows you to expose multiple HTTPS applications each with its own SSL certificate on the same listener.

378
Q

You have an application hosted on a set of EC2 instances managed by an Auto Scaling Group with both desired and maximum capacity set to 3. You have also created a CloudWatch Alarm that is configured to scale out your ASG when CPU Utilization reaches 60%. Your application suddenly received huge traffic and is now running at 80% CPU Utilization. What will happen?

1) Nothing
2) The desired capacity will go up to 4 and the maximum capacity will stay at 3
3) The desired capacity will go up to 4 and the maximum capacity will stay at 4

A

1) Nothing

The Auto Scaling Group can’t go over the maximum capacity (which you configured) during scale-out events.
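
The clamping behavior reduces to one line, shown here as an illustrative sketch (not the actual ASG algorithm):

```python
def scale_out(desired, maximum, step=1):
    """Scale-out is clamped: desired capacity can never exceed the
    ASG's configured maximum capacity."""
    return min(desired + step, maximum)

print(scale_out(desired=3, maximum=3))  # prints 3 -- already at max, nothing happens
print(scale_out(desired=2, maximum=3))  # prints 3
```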

379
Q

You have an Auto Scaling Group fronted by an Application Load Balancer. You have configured the ASG to use ALB Health Checks, then one EC2 instance has just been reported unhealthy. What will happen to the EC2 instance?

1) The ASG will keep the instance running and re-start the application
2) The ASG will detach the EC2 instance and leave it running
3) The ASG will terminate the EC2 instance

A

3) The ASG will terminate the EC2 instance

You can configure the Auto Scaling Group to determine the EC2 instances’ health based on Application Load Balancer Health Checks instead of EC2 Status Checks (default). When an EC2 instance fails the ALB Health Checks, it is marked unhealthy and will be terminated while the ASG launches a new EC2 instance.

380
Q

Your boss asked you to scale your Auto Scaling Group based on the number of requests per minute your application makes to your database. What should you do?

1) Create a CloudWatch custom metric then create a CloudWatch Alarm on this metric to scale your ASG
2) You politely tell him it’s impossible
3) Enable Detailed Monitoring then create a CloudWatch Alarm to scale your ASG

A

1) Create a CloudWatch custom metric then create a CloudWatch Alarm on this metric to scale your ASG

There’s no CloudWatch Metric for “requests per minute” for backend-to-database connections. You need to create a CloudWatch Custom Metric, then create a CloudWatch Alarm.

381
Q

An application is deployed with an Application Load Balancer and an Auto Scaling Group. Currently, you manually scale the ASG and you would like to define a Scaling Policy that will ensure the average number of connections to your EC2 instances is around 1000. Which Scaling Policy should you use?

1) Simple Scaling Policy
2) Step Scaling Policy
3) Target Tracking Policy
4) Schedule Scaling Policy

A

3) Target Tracking Policy

382
Q

You have an ASG and a Network Load Balancer. The application on your ASG supports the HTTP protocol and is integrated with the Load Balancer health checks. You are currently using the TCP health checks. You would like to migrate to using HTTP health checks, what do you do?

1) Migrate to an Application Load Balancer
2) Migrate the health check to HTTP

A

2) Migrate the health check to HTTP

The NLB supports HTTP health checks as well as TCP and HTTPS.

383
Q

You have a website hosted on EC2 instances in an Auto Scaling Group fronted by an Application Load Balancer. Currently, the website is served over HTTP, and you have been tasked to configure it to use HTTPS. You have created a certificate in ACM and attached it to the Application Load Balancer. What can you do to force users to access the website using HTTPS instead of HTTP?

1) Send an email to all customers to use HTTPS instead of HTTP
2) Configure the ALB to redirect HTTP to HTTPS
3) Configure the DNS record to redirect HTTP to HTTPS

A

2) Configure the ALB to redirect HTTP to HTTPS

384
Q

You have purchased mycoolcompany.com on Amazon Route 53 Registrar and would like the domain to point to your Elastic Load Balancer my-elb-1234567890.us-west-2.elb.amazonaws.com. Which Route 53 Record type must you use here?

1) CNAME
2) Alias

A

2) Alias

385
Q

You have deployed a new Elastic Beanstalk environment and would like to direct 5% of your production traffic to this new environment. This allows you to monitor CloudWatch metrics and ensure that no bugs exist in your new environment. Which Route 53 Record type allows you to do so?

1) Simple
2) Weighted
3) Latency
4) Failover

A

2) Weighted

Weighted Routing Policy allows you to direct part of the traffic based on weight (e.g., percentage). It’s a common use case to send part of the traffic to a new version of your application.
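
Weighted routing boils down to a cumulative-weight lookup. A deterministic sketch (the environment names are invented; `roll` stands in for a uniform random value in [0, 1), kept as a parameter so the logic is testable):

```python
def route(weights, roll):
    """Pick a destination by weight. `roll` is a uniform value in
    [0, 1) — pass random.random() in practice. The destination whose
    cumulative weight range contains the roll is chosen."""
    threshold = roll * sum(weights.values())
    cumulative = 0
    for destination, weight in weights.items():
        cumulative += weight
        if threshold < cumulative:
            return destination
    return destination  # fallback for rounding at the upper edge

# 5% of traffic to the new Elastic Beanstalk environment, 95% to the current one.
envs = {"new-env": 5, "current-env": 95}
print(route(envs, 0.03))  # prints new-env
print(route(envs, 0.50))  # prints current-env
```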

386
Q

You have updated a Route 53 Record’s myapp.mydomain.com value to point to a new Elastic Load Balancer, but it looks like users are still redirected to the old ELB. What is a possible cause for this behavior?

1) Because of the Alias record
2) Because of the CNAME record
3) Because of the TTL
4) Because of Route 53 Health Checks

A

3) Because of the TTL

Each DNS record has a TTL (Time To Live) which tells clients how long to cache the value so they don’t overload the DNS Resolver with requests. The TTL value should strike a balance between how long the value is cached and how quickly record changes reach clients.
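
The “users still hit the old ELB” behavior can be reproduced with a toy resolver cache (a simplification of real resolver behavior; the ELB hostnames are made up): a cached answer is reused until its TTL expires, so the updated record is only fetched afterwards.

```python
import time

class DnsCache:
    """Toy resolver cache: a cached answer is reused until its TTL
    expires — which is exactly why clients keep resolving the old ELB
    right after you update the record."""

    def __init__(self, lookup):
        self._lookup = lookup   # function: name -> (value, ttl_seconds)
        self._cache = {}        # name -> (value, expires_at)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        hit = self._cache.get(name)
        if hit and now < hit[1]:
            return hit[0]       # still within TTL: possibly stale
        value, ttl = self._lookup(name)
        self._cache[name] = (value, now + ttl)
        return value
```

With a 300-second TTL, a lookup at t=100 still returns the old answer; only a lookup after expiry (e.g. t=400) picks up the new ELB.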

387
Q

You have an application that’s hosted in two different AWS Regions us-west-1 and eu-west-2. You want your users to get the best possible user experience by minimizing the response time from application servers to your users. Which Route 53 Routing Policy should you choose?

1) Multi Value
2) Weighted
3) Latency
4) Geolocation

A

3) Latency

Latency Routing Policy will evaluate the latency between your users and AWS Regions, and help them get a DNS response that will minimize their latency (i.e., response time).

388
Q

You have a legal requirement that people in any country but France should NOT be able to access your website. Which Route 53 Routing Policy helps you in achieving this?

1) Multi Value
2) Weighted
3) Latency
4) Geolocation

A

4) Geolocation

389
Q

You have purchased a domain on GoDaddy and would like to use Route 53 as the DNS Service Provider. What should you do to make this work?

1) Request for a domain transfer
2) Create a Private Hosted Zone and update the 3rd party Registrar NS records
3) Create a Public Hosted Zone and update the Route 53 NS records
4) Create a Public Hosted Zone and update the 3rd party Registrar NS records

A

4) Create a Public Hosted Zone and update the 3rd party Registrar NS records

Public Hosted Zones are meant for people requesting your website through the Internet. The NS records must be updated on the 3rd party Registrar.

390
Q

Which of the following are NOT valid Route 53 Health Checks?

1) Health Checks that monitor SQS Queue
2) Health Checks that monitor an Endpoint
3) Health Checks that monitor other Health Checks
4) Health Checks that monitor CloudWatch Alarms

A

1) Health Checks that monitor SQS Queue

391
Q

Your website TriangleSunglasses.com is hosted on a fleet of EC2 instances managed by an Auto Scaling Group and fronted by an Application Load Balancer. Your ASG has been configured to scale on-demand based on the traffic going to your website. To reduce costs, you have configured the ASG to scale based on the traffic going through the ALB. To make the solution highly available, you have updated your ASG and set the minimum capacity to 2. How can you further reduce the costs while respecting the requirements?

1) Remove the ALB and use an Elastic IP instead
2) Reserve 2 EC2 instances
3) Reduce the minimum capacity to 1
4) Reduce the minimum capacity to 0

A

2) Reserve 2 EC2 instances

This is the way to save further costs, as we will run 2 EC2 instances no matter what.

392
Q

Which of the following will NOT help us while designing a STATELESS application tier?

1) Store session data in Amazon RDS
2) Store session data in Amazon ElastiCache
3) Store session data in the client HTTP cookies
4) Store session data on EBS volumes

A

4) Store session data on EBS volumes

EBS volumes are created in a specific AZ and can only be attached to one EC2 instance at a time.

393
Q

You want to install software updates on 100s of Linux EC2 instances that you manage. You want to store these updates on shared storage which should be dynamically loaded on the EC2 instances and shouldn’t require heavy operations. What do you suggest?

1) Store the software updates on EBS and sync them using data replication software from one master in each AZ
2) Store the software updates on EFS and mount EFS as a network drive at startup
3) Package the software updates as an EBS snapshot and create EBS volumes for each new software update
4) Store the software updates on Amazon RDS

A

2) Store the software updates on EFS and mount EFS as a network drive at startup

EFS is a network file system (NFS) that allows you to mount the same file system to 100s of EC2 instances. Storing software updates on an EFS allows each EC2 instance to access them.

394
Q

As a Solutions Architect, you’re planning to migrate a complex ERP software suite to AWS Cloud. You’re planning to host the software on a set of Linux EC2 instances managed by an Auto Scaling Group. The software traditionally takes over an hour to set up on a Linux machine. What do you recommend to speed up the installation process when there’s a scale-out event?

1) Use a Golden AMI
2) Bootstrap using EC2 User Data
3) Store the application in Amazon RDS
4) Retrieve the application setup files from EFS

A

1) Use a Golden AMI

Golden AMI is an image that contains all your software installed and configured so that future EC2 instances can boot up quickly from that AMI.

395
Q

You’re developing an application and would like to deploy it to Elastic Beanstalk with minimal cost. You should run it in ………………

1) Single Instance Mode
2) High availability Mode

A

1) Single Instance Mode

The question mentions that you’re still in the development stage and you want to reduce costs. Single Instance Mode will create one EC2 instance and one Elastic IP.

396
Q

You’re deploying your application to an Elastic Beanstalk environment but you notice that the deployment process is painfully slow. After reviewing the logs, you found that your dependencies are resolved on each EC2 instance each time you deploy. How can you speed up the deployment process with minimal impact?

1) Remove the dependencies in your code
2) Place the dependencies in Amazon EFS
3) Create a Golden AMI that contains the dependencies and use that image to launch the EC2 instances

A

3) Create a Golden AMI that contains the dependencies and use that image to launch the EC2 instances

Golden AMI is an image that contains all your software, dependencies, and configurations, so that future EC2 instances can boot up quickly from that AMI.

397
Q

Which database helps you store relational datasets, with SQL language compatibility and the capability of processing transactions such as insert, update, and delete?

1) Amazon DocumentDB
2) Amazon RDS
3) Amazon Dynamo
4) Amazon ElastiCache

A

2) Amazon RDS

398
Q

Which AWS service provides you with caching capability that is compatible with Redis API?

1) Amazon RDS
2) Amazon DynamoDB
3) Amazon OpenSearch
4) Amazon ElastiCache

A

4) Amazon ElastiCache

Amazon ElastiCache is a fully managed in-memory data store, compatible with Redis or Memcached.

399
Q

You want to migrate an on-premises MongoDB NoSQL database to AWS. You don’t want to manage any database servers, so you want to use a managed NoSQL Serverless database, that provides you with high availability, durability, and reliability, and the capability to take your database global. Which database should you choose?

1) Amazon RDS
2) Amazon DynamoDB
3) Amazon DocumentDB
4) Amazon Aurora

A

2) Amazon DynamoDB

400
Q

You are looking to perform Online Transaction Processing (OLTP). You would like to use a database that has built-in auto-scaling capabilities and provides you with the maximum number of replicas for its underlying storage. What AWS service do you recommend?

1) Amazon ElastiCache
2) Amazon Neptune
3) Amazon Aurora
4) Amazon RDS

A

3) Amazon Aurora

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database. It features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 128TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across 3 AZs.

401
Q

As a Solutions Architect, a startup company asked you for help as they are working on an architecture for a social media website where users can be friends with each other and like each other’s posts. The company plans on performing some complicated queries such as “What is the number of likes on the posts that have been posted by the friends of Mike?”. Which database do you recommend?

1) Amazon RDS
2) Amazon QLDB
3) Amazon Neptune
4) Amazon OpenSearch

A

3) Amazon Neptune

Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets.

Social media: think graph database.

402
Q

You have a set of files, 100MB each, that you want to store in a reliable and durable key-value store. Which AWS service do you recommend?

1) Amazon Aurora
2) Amazon S3
3) Amazon DynamoDB
4) Amazon ElastiCache

A

2) Amazon S3

403
Q

A company has an on-premises website that uses ReactJS as its frontend, NodeJS as its backend, and MongoDB for the database. There are some issues with the self-hosted MongoDB database, as a lot of maintenance is required, and they don’t have, and can’t afford, the resources or expertise to handle those issues. So, a decision was made to migrate the website to AWS. They have decided to host the frontend ReactJS application in an S3 bucket and the NodeJS backend on a set of EC2 instances. Which AWS service can they use to migrate the MongoDB database that provides them with high scalability and availability without making any code changes?

1) Amazon ElastiCache
2) Amazon DocumentDB
3) Amazon RDS for MongoDB
4) Amazon Neptune

A

2) Amazon DocumentDB

404
Q

A company is using a self-hosted on-premises Apache Cassandra database, which they want to migrate to AWS. Which AWS service can they use which provides them with a fully managed, highly available, and scalable Apache Cassandra database?

1) Amazon DocumentDB
2) Amazon DynamoDB
3) Amazon Timestream
4) Amazon Keyspaces

A

4) Amazon Keyspaces

405
Q

An online payment company is using AWS to host its infrastructure. Due to the application’s nature, they have a strict requirement to store an accurate record of financial transactions such as credit and debit transactions. Those transactions must be stored in secured, immutable, encrypted storage which can be cryptographically verified. Which AWS service is best suited for this use case?

1) Amazon DocumentDB
2) Amazon Aurora
3) Amazon QLDB
4) Amazon Neptune

A

3) Amazon QLDB

406
Q

A startup is working on developing a new project to reduce forest fires due to climate change. The startup is developing sensors that will be spread across the entire forest to take readings such as temperature, humidity, and pressure, which will help detect forest fires before they happen. They are going to have thousands of sensors storing many readings each second. There is a requirement to store those readings and run fast analytics so they can predict whether there is a fire. Which AWS service can they use to store those readings?

1) Amazon Timestream
2) Amazon Neptune
3) Amazon S3
4) Amazon ElastiCache

A

1) Amazon Timestream

407
Q

You have an RDS DB instance that’s configured to push its database logs to CloudWatch. You want to create a CloudWatch alarm if there’s an Error found in the logs. How would you do that?

1) Create a scheduled CloudWatch Event that triggers an AWS Lambda function every hour to scan the logs and notify you through an SNS topic
2) Create a CloudWatch Logs Metric Filter that filters the logs for the keyword Error, then create a CloudWatch Alarm based on that Metric Filter
3) Create an AWS Config Rule that monitors Errors in your database logs and notifies you through an SNS topic

A

2) Create a CloudWatch Logs Metric Filter that filters the logs for the keyword Error, then create a CloudWatch Alarm based on that Metric Filter

408
Q

You have an application hosted on a fleet of EC2 instances managed by an Auto Scaling Group whose minimum capacity you configured to 2. Also, you have created a CloudWatch Alarm that is configured to scale in your ASG when CPU Utilization is below 60%. Currently, your application runs on 2 EC2 instances, has low traffic, and the CloudWatch Alarm is in the ALARM state. What will happen?

1) One EC2 instance will be terminated and the ASG desired and minimum capacity will go to 1
2) The CloudWatch Alarm will remain in ALARM state but never decrease the number of EC2 instances in the ASG
3) The CloudWatch Alarm will be detached from the ASG
4) The CloudWatch Alarm will go in OK state

A

2) The CloudWatch Alarm will remain in ALARM state but never decrease the number of EC2 instances in the ASG

The number of EC2 instances in an ASG cannot go below the minimum capacity, even if the CloudWatch Alarm would, in theory, trigger an EC2 instance termination.

409
Q

How would you monitor your EC2 instance memory usage in CloudWatch?

1) Enable EC2 Detailed Monitoring
2) By default, the EC2 instance pushes memory usage to CloudWatch
3) Use the Unified CloudWatch Agent to push memory usage as a custom metric to CloudWatch

A

3) Use the Unified CloudWatch Agent to push memory usage as a custom metric to CloudWatch

410
Q

You have made a configuration change and would like to evaluate the impact of it on the performance of your application. Which AWS service should you use?

1) Amazon CloudWatch
2) AWS CloudTrail

A

1) Amazon CloudWatch

Amazon CloudWatch is a monitoring service that allows you to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. It is used to monitor your applications’ performance and metrics.

411
Q

Someone terminated an EC2 instance in your AWS account last week, which was hosting a critical database that contains sensitive data. Which AWS service helps you find out who did that and when?

1) CloudWatch Metrics
2) CloudWatch Alarms
3) CloudWatch Events
4) AWS CloudTrail

A

4) AWS CloudTrail

AWS CloudTrail allows you to log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. It provides the event history of your AWS account activity, including API calls made through the AWS Management Console, AWS SDKs, and AWS CLI. So, the EC2 instance termination API call will appear here. You can use CloudTrail to detect unusual activity in your AWS accounts.

412
Q

You have CloudTrail enabled for your AWS Account in all AWS Regions. What should you use to detect unusual activity in your AWS Account?

1) CloudTrail Data Events
2) CloudTrail Insights
3) CloudTrail Management Events

A

2) CloudTrail Insights

413
Q

One of your teammates terminated an EC2 instance 4 months ago which had critical data. You don’t know who did it, so you are going to review all API calls within this period using CloudTrail. You already have CloudTrail set up and configured to send logs to an S3 bucket. What should you do to find out who did it?

1) Use CloudTrail Event History in CloudTrail Console
2) Analyze CloudTrail logs in S3 bucket using Amazon Athena

A

2) Analyze CloudTrail logs in S3 bucket using Amazon Athena

You can use the CloudTrail Console to view the last 90 days of recorded API activity. For events older than 90 days, use Athena to analyze CloudTrail logs stored in S3.

414
Q

You are running a website on a fleet of EC2 instances with an OS that has a known vulnerability on port 84. You want to continuously monitor your EC2 instances to check whether they have port 84 exposed. How should you do this?

1) Setup CloudWatch Metrics
2) Setup CloudTrail Trails
3) Setup Config Rules
4) Schedule a CloudWatch Event to trigger a Lambda function to scan your EC2 instances

A

3) Setup Config Rules

415
Q

You would like to evaluate the compliance of your resource’s configurations over time. Which AWS service will you choose?

1) AWS Config
2) Amazon CloudWatch
3) AWS CloudTrail

A

1) AWS Config

416
Q

Someone changed the configuration of a resource and made it non-compliant. Which AWS service is responsible for logging who made modifications to resources?

1) Amazon CloudWatch
2) AWS CloudTrail
3) AWS Config

A

2) AWS CloudTrail

417
Q

You have enabled AWS Config to monitor Security Groups for unrestricted SSH access to any of your EC2 instances. Which AWS Config feature can you use to automatically re-configure your Security Groups to their correct state?

1) AWS Config Remediations
2) AWS Config Rules
3) AWS Config Notifications

A

1) AWS Config Remediations

418
Q

You are running a critical website on a set of EC2 instances with a tightened Security Group that has restricted SSH access. You have enabled AWS Config in your AWS Region and you want to be notified via email when someone modifies your EC2 instances’ Security Group. Which AWS Config feature helps you do this?

1) AWS Config Remediations
2) AWS Config Rules
3) AWS Config Notifications

A

3) AWS Config Notifications

419
Q

…………………………. is a CloudWatch feature that allows you to send CloudWatch metrics in near real-time to an S3 bucket (through Kinesis Data Firehose) and 3rd party destinations (e.g., Splunk, Datadog, …).

1) CloudWatch Metric Stream
2) CloudWatch Log Stream
3) CloudWatch Metric Filter
4) CloudWatch Log Group

A

1) CloudWatch Metric Stream

420
Q

A DevOps engineer is working for a company and managing its infrastructure and resources on AWS. There was a sudden spike in traffic for the company’s main application, which was not normal at this time of the year. The application is hosted on a couple of EC2 instances in private subnets and is fronted by an Application Load Balancer in a public subnet. To detect whether this is normal traffic or an attack, the DevOps engineer enabled VPC Flow Logs for the subnets and stored those logs in a CloudWatch Log Group. The DevOps engineer wants to analyze those logs and find the top IP addresses making requests against the website to check whether there is an attack. Which of the following can help the DevOps engineer analyze those logs?

1) CloudWatch Metric Stream
2) CloudWatch Alarm
3) CloudWatch Contributor Insights
4) CloudWatch Metric Filter

A

3) CloudWatch Contributor Insights
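
The “top talkers” aggregation Contributor Insights performs can be sketched locally (an illustration only, not the actual service; this assumes the default VPC Flow Log format, where the source address is the 4th space-separated field):

```python
from collections import Counter

def top_talkers(flow_log_lines, n=3):
    """Count flow log records per source IP and return the n most
    frequent — the Contributor-Insights-style 'top IPs' view."""
    sources = (line.split()[3] for line in flow_log_lines if line.strip())
    return Counter(sources).most_common(n)

# Sample lines in the default flow log format (addresses are made up).
logs = [
    "2 123456789012 eni-0a1b 198.51.100.9 10.0.1.5 443 32768 6 10 840 1600000000 1600000060 ACCEPT OK",
    "2 123456789012 eni-0a1b 198.51.100.9 10.0.1.5 443 32769 6 10 840 1600000000 1600000060 ACCEPT OK",
    "2 123456789012 eni-0a1b 203.0.113.4 10.0.1.5 443 32770 6 10 840 1600000000 1600000060 ACCEPT OK",
]
print(top_talkers(logs, n=1))  # prints [('198.51.100.9', 2)]
```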

421
Q

A company is developing a Serverless application on AWS using Lambda, DynamoDB, and Cognito. A junior developer joined a few weeks ago and accidentally deleted one of the DynamoDB tables in the dev AWS account which contained important data. The CTO asks you to prevent this from happening again, and there must be a notification system to monitor any attempts to delete DynamoDB tables. What would you do?

1) Assign developers to a certain IAM group which prevents deletion of DynamoDB tables. Configure EventBridge to capture any DeleteTable API calls through S3 and send a notification using KMS
2) Assign developers to a certain IAM group which prevents deletion of DynamoDB tables. Configure EventBridge to capture any DeleteTable API calls through CloudTrail and send a notification using SNS
3) Assign developers to a certain IAM group which prevents deletion of DynamoDB tables. Configure EventBridge to capture any DeleteTable API calls through CloudTrail and send a notification using KMS

A

2) Assign developers to a certain IAM group which prevents deletion of DynamoDB tables. Configure EventBridge to capture any DeleteTable API calls through CloudTrail and send a notification using SNS

422
Q

A company has a running Serverless application on AWS which uses EventBridge as an inter-communication channel between different services within the application. There is a requirement to use the events from the prod environment in the dev environment to run some tests. The tests will be done every 6 months, so the events need to be stored and used later on. What is the most efficient and cost-effective way to store EventBridge events and use them later?

1) Use EventBridge Archive and Replay feature
2) Create a Lambda function to store the EventBridge events in an S3 bucket for later usage
3) Configure EventBridge to store events in a DynamoDB table

A

1) Use EventBridge Archive and Replay feature

423
Q

You have a requirement for a highly available and fault-tolerant network architecture that can provide seamless failover between AWS regions. Which AWS service can help you achieve this?

A) Amazon VPC (Virtual Private Cloud)
B) AWS Global Accelerator
C) AWS Direct Connect
D) Amazon Route 53

A

D) Amazon Route 53

Amazon Route 53 is a scalable domain name system (DNS) web service that can provide automatic failover between AWS regions in the event of a service disruption. By configuring DNS failover policies, Route 53 can route traffic to an alternate region if the primary region becomes unavailable, ensuring high availability and fault tolerance.

Both AWS Global Accelerator and Amazon Route 53 could be considered viable options. However, if the emphasis is on DNS-level traffic management with advanced routing policies (like geolocation or latency-based routing), Amazon Route 53 would be the more suitable choice. On the other hand, if the focus is on optimizing performance with rapid failover capabilities and a consistent entry point (static IPs) for global users, then AWS Global Accelerator would be the ideal solution.
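
The failover setup described above can be sketched as a Route 53 change batch with a PRIMARY and a SECONDARY record (domain, IP addresses, and health check ID are hypothetical). Route 53 serves the SECONDARY record only while the PRIMARY's health check is failing:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "primary-us-east-1",
        "Failover": "PRIMARY",
        "TTL": 60,
        "HealthCheckId": "<primary-health-check-id>",
        "ResourceRecords": [{ "Value": "203.0.113.10" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "secondary-eu-west-1",
        "Failover": "SECONDARY",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "198.51.100.20" }]
      }
    }
  ]
}
```

A low TTL keeps failover reasonably quick, since resolvers cache the old answer until it expires.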

424
Q

You have a requirement to process and transform large datasets in a distributed manner. Which AWS service can help you achieve this?

A) AWS Lambda
B) AWS Glue
C) Amazon EMR (Elastic MapReduce)
D) AWS Batch

A

C) Amazon EMR (Elastic MapReduce)

Amazon EMR is a fully managed big data processing service that allows you to process and transform large datasets using popular frameworks such as Apache Spark, Apache Hadoop, and Presto. EMR enables distributed processing across a cluster of EC2 instances, making it suitable for big data processing scenarios.
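
As an illustration, a Spark transformation job could be submitted to a running EMR cluster as a step (for example via `aws emr add-steps`); the bucket and script names here are hypothetical:

```json
{
  "Name": "transform-dataset",
  "ActionOnFailure": "CONTINUE",
  "HadoopJarStep": {
    "Jar": "command-runner.jar",
    "Args": [
      "spark-submit",
      "--deploy-mode", "cluster",
      "s3://example-bucket/jobs/transform.py"
    ]
  }
}
```

`command-runner.jar` is EMR's generic step launcher; here it runs `spark-submit`, which distributes the PySpark script across the cluster's nodes.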

425
Q

You have a requirement to store large amounts of data with long-term retention, but with occasional access requirements. Which Amazon S3 storage class offers a combination of low-cost storage and high retrieval performance?

A) Amazon S3 Intelligent-Tiering
B) Amazon S3 Glacier
C) Amazon S3 Standard-IA
D) Amazon S3 One Zone-IA

A

A) Amazon S3 Intelligent-Tiering

Amazon S3 Intelligent-Tiering is a storage class in Amazon S3 that offers a combination of low-cost storage and high retrieval performance. It automatically moves data between access tiers (frequent access, infrequent access, and archive instant access) based on changing access patterns, optimizing costs while maintaining performance.
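
For long-term retention, the optional archive tiers can also be enabled per bucket with an Intelligent-Tiering configuration. A minimal sketch, with the configuration ID and day thresholds chosen purely for illustration:

```json
{
  "Id": "archive-rarely-accessed",
  "Status": "Enabled",
  "Tierings": [
    { "Days": 90, "AccessTier": "ARCHIVE_ACCESS" },
    { "Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS" }
  ]
}
```

With this in place, objects untouched for 90 and 180 days move into the archive tiers automatically, trading retrieval latency for lower storage cost.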

426
Q

You have a requirement for cost-effective storage with low-latency access for frequently accessed data. Which storage option in AWS offers this combination?

A) Amazon S3 Glacier
B) Amazon S3 Standard
C) Amazon EBS
D) Amazon EFS

A

B) Amazon S3 Standard

Amazon S3 Standard storage class offers cost-effective storage with low-latency access for frequently accessed data. It provides high durability, availability, and performance for storing and retrieving objects, making it suitable for frequently accessed data that requires low-latency access.

427
Q
A