AWS Solutions Architect Associate Flashcards
What is a proper definition of an IAM Role?
1) IAM Users in multiple User Groups
2) An IAM entity that defines a password policy for IAM users
3) An IAM entity that defines a set of permissions for making requests to AWS services, and will be used by an AWS service
4) Permissions assigned to IAM Users to perform actions
3) An IAM entity that defines a set of permissions for making requests to AWS services, and will be used by an AWS service
Some AWS services need to perform actions on your behalf. To do so, you assign permissions to AWS services with IAM Roles.
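For illustration, a minimal sketch of a role's trust policy that lets the EC2 service assume the role (the permissions policies attached to the role then define what it can actually do):
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}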
Which of the following is an IAM Security Tool?
1) IAM Credentials Report
2) IAM Root Account Manager
3) IAM Services Report
4) IAM Security Advisor
1) IAM Credentials Report
IAM Credentials report lists all your AWS Account’s IAM Users and the status of their various credentials.
Which answer is INCORRECT regarding IAM Users?
1) IAM Users can belong to multiple User Groups
2) IAM Users don’t have to belong to a User Group
3) IAM Policies can be attached directly to IAM Users
4) IAM Users access AWS services using root account credentials
4) IAM Users access AWS services using root account credentials
IAM Users access AWS services using their own credentials (username & password or Access Keys).
Which of the following is an IAM best practice?
1) Create several IAM Users for one physical person
2) Don’t use the root user account
3) Share your AWS account credentials with your colleague, so they can perform a task for you
4) Do not enable MFA for easier access
2) Don’t use the root user account
Use the root account only to create your first IAM User and a few account/service management tasks. For everyday tasks, use an IAM User.
What are IAM Policies?
1) A set of policies that defines how AWS accounts interact with each other
2) JSON documents that define a set of permissions for making requests to AWS services, and can be used by IAM Users, User Groups, and IAM Roles
3) A set of policies that define a password for IAM Users
4) A set of policies defined by AWS that show how customers interact with AWS
2) JSON documents that define a set of permissions for making requests to AWS services, and can be used by IAM Users, User Groups, and IAM Roles
What is tenancy in regards to EC2?
Tenancy defines how EC2 instances are distributed across physical hardware and affects pricing. There are three tenancy options available:
1) Shared (default) — Multiple AWS accounts may share the same physical hardware.
2) Dedicated Instance (dedicated) — Your instance runs on single-tenant hardware.
3) Dedicated Host (host) — Your instance runs on a physical server with EC2 instance capacity fully dedicated to your use, an isolated server with configurations that you can control.
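As a sketch, tenancy is chosen at launch through the Placement parameters. Assuming a RunInstances request body (for example, passed to aws ec2 run-instances --cli-input-json, with a hypothetical AMI ID and instance type):
{
  "ImageId": "ami-12345678",
  "InstanceType": "m5.large",
  "MinCount": 1,
  "MaxCount": 1,
  "Placement": { "Tenancy": "dedicated" }
}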
Which principle should you apply regarding IAM Permissions?
1) Grant most privilege
2) Grant more permissions if your employee asks you to
3) Grant least privilege
4) Restrict root account permissions
3) Grant least privilege
Don’t give more permissions than the user needs.
What should you do to increase your root account security?
1) Remove permissions from the root account
2) Only access AWS services through AWS Command Line Interface (CLI)
3) Don’t create IAM Users, only access your AWS account using the root account
4) Enable MFA
4) Enable MFA
When you enable MFA, this adds another layer of security. Even if your password is stolen, lost, or hacked, your account is not compromised.
IAM User Groups can contain IAM Users and other User Groups.
True
False
False
IAM User Groups can contain only IAM Users.
An IAM policy consists of one or more statements. A statement in an IAM Policy consists of the following, EXCEPT:
1) Effect
2) Principal
3) Version
4) Action
5) Resource
3) Version
A statement in an IAM Policy consists of Sid, Effect, Principal, Action, Resource, and Condition. Version is part of the IAM Policy itself, not the statement.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "1",
    "Effect": "Allow",
    "Principal": { "AWS": ["arn:aws:iam::account-id:root"] },
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::mybucket",
      "arn:aws:s3:::mybucket/*"
    ]
  }]
}
You have strong regulatory requirements to only allow fully internally audited AWS services in production. You still want to allow your teams to experiment in a development environment while services are being audited. How can you best set this up?
1) Provide the Dev team with a completely independent AWS account
2) Apply a global IAM policy on your Prod account
3) Create an AWS Organization, create Prod and Dev OUs, then apply an SCP on the Prod OU
4) Create an AWS Config Rule
3) Create an AWS Organization, create Prod and Dev OUs, then apply an SCP on the Prod OU
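An SCP on the Prod OU restricts which services production accounts can use, while accounts in the Dev OU remain unrestricted for experimentation. A minimal sketch, assuming a hypothetical allow-list of audited services (here EC2, S3, and RDS):
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyUnauditedServices",
    "Effect": "Deny",
    "NotAction": ["ec2:*", "s3:*", "rds:*"],
    "Resource": "*"
  }]
}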
You are managing the AWS account for your company, and you want to give one of the developers access to read files from an S3 bucket. You have updated the bucket policy to this, but he still can’t access the files in the bucket. What is the problem?
{ "Version": "2012-10-17", "Statement": [{ "Sid": "AllowsRead", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::123456789012:user/Dave" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::static-files-bucket-xxx" }] }
1) Everything is okay, he just needs to logout and login again
2) The bucket does not contain any files yet
3) You should change the resource to arn:aws:s3:::static-files-bucket-xxx/*, because this is an object level permission
3) You should change the resource to arn:aws:s3:::static-files-bucket-xxx/*, because this is an object level permission
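For reference, the corrected policy (same statement, with the object-level Resource):
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowsRead",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::123456789012:user/Dave" },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::static-files-bucket-xxx/*"
  }]
}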
You have 5 AWS Accounts that you manage using AWS Organizations. You want to restrict access to certain AWS services in each account. How should you do that?
1) Using IAM Roles
2) Using AWS Organizations SCP
3) Using AWS Config
2) Using AWS Organizations SCP
Which of the following IAM condition keys can you use to allow API calls only to a specified AWS Region?
1) aws:RequiredRegion
2) aws:SourceRegion
3) aws:InitialRegion
4) aws:RequestedRegion
4) aws:RequestedRegion
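For example, a statement like the following (Region value hypothetical) denies any action unless the request is made against eu-west-1:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": { "StringNotEquals": { "aws:RequestedRegion": "eu-west-1" } }
  }]
}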
When configuring permissions for EventBridge to use a Lambda function as a target you should use ………………….., but when you want to configure a Kinesis Data Stream as a target you should use …………………..
1) Identity-Based Policy, Resource-Based Policy
2) Resource-Based Policy, Identity-Based Policy
3) Identity-Based Policy, Identity-Based Policy
4) Resource-Based Policy, Resource-Based Policy
2) Resource-Based Policy, Identity-Based Policy
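Lambda is one of the targets that EventBridge invokes through a resource-based policy on the target itself, whereas for Kinesis Data Streams, EventBridge assumes an IAM role whose identity-based policy grants access to the stream. A sketch of the resource-based statement added to the Lambda function (function and rule ARNs hypothetical):
{
  "Effect": "Allow",
  "Principal": { "Service": "events.amazonaws.com" },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
  "Condition": { "ArnLike": { "AWS:SourceArn": "arn:aws:events:us-east-1:123456789012:rule/my-rule" } }
}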
Which AWS Directory Service is best suited for an organization looking to extend their existing on-premises Active Directory to the AWS Cloud without replicating their AD data?
1) AWS Managed Microsoft AD
2) AD Connector
3) Simple AD
4) Amazon Cognito
2) AD Connector
AD Connector acts as a proxy to redirect directory requests to your existing on-premises Active Directory, allowing you to manage AWS resources without replicating your AD data.
A company requires a fully managed, highly available, and scalable Active Directory service in AWS to support their Windows-based applications. Which AWS Directory Service should they use?
A. Simple AD
B. Amazon Cognito
C. AWS Managed Microsoft AD
D. AD Connector
C. AWS Managed Microsoft AD
AWS Managed Microsoft AD is a full-fledged Active Directory managed by AWS, ideal for Windows-based applications and complex AD tasks.
Which AWS Directory Service offers a cost-effective solution for small to medium-sized businesses that need basic AD capabilities such as domain joining and group policies?
A. AWS Managed Microsoft AD
B. Amazon Cognito
C. AD Connector
D. Simple AD
D. Simple AD
Simple AD is a Samba-based, AD-compatible service that provides basic Active Directory features, making it suitable for smaller businesses with basic directory service needs.
An organization wants to use its existing server-bound software licenses (such as Windows Server and SQL Server) within AWS. Which AWS Directory Service supports Bring Your Own License (BYOL) compatibility?
A. AWS Managed Microsoft AD
B. AD Connector
C. Amazon Cognito
D. Simple AD
A. AWS Managed Microsoft AD
AWS Managed Microsoft AD allows for Bring Your Own License (BYOL) compatibility, enabling the use of existing server-bound software licenses within AWS
An organization wants to ensure that their IAM policies allow access to an S3 bucket only if the requests are coming from IP addresses within their corporate network. Which IAM policy condition key should be used to achieve this?
A. aws:SourceIp
B. aws:SourceArn
C. aws:UserAgent
D. aws:SecureTransport
A. aws:SourceIp
The aws:SourceIp condition key in IAM policies is used to specify the IP address or IP address range from which the requests are allowed or denied.
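A sketch of such a statement, assuming a hypothetical corporate CIDR of 203.0.113.0/24 and bucket name:
{
  "Effect": "Allow",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::corp-bucket/*",
  "Condition": { "IpAddress": { "aws:SourceIp": "203.0.113.0/24" } }
}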
A company wants to restrict access to their AWS resources, ensuring that API calls are only made using HTTPS. Which IAM policy condition key should be utilized to enforce this policy?
A. aws:SecureTransport
B. aws:SourceIp
C. aws:UserAgent
D. aws:RequestTime
A. aws:SecureTransport
The aws:SecureTransport condition key is used in IAM policies to check whether the request was sent using SSL (HTTPS).
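A common pattern is an explicit Deny on any request not sent over SSL (bucket name hypothetical):
{
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"],
  "Condition": { "Bool": { "aws:SecureTransport": "false" } }
}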
How can an AWS Solutions Architect restrict IAM user access to resources based on the user’s tagged department, such as only allowing access to resources tagged with “Department”: “Finance”?
A. Use the aws:RequestTag/Department condition key.
B. Use the aws:TagKeys condition key.
C. Use the aws:ResourceTag/Department condition key.
D. Use the aws:User/Department condition key.
C. Use the aws:ResourceTag/Department condition key
The aws:ResourceTag/Department condition key in IAM policies allows for the specification of conditions based on the tags on the AWS resource being accessed.
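A sketch of such a statement, assuming hypothetical EC2 actions:
{
  "Effect": "Allow",
  "Action": ["ec2:StartInstances", "ec2:StopInstances"],
  "Resource": "*",
  "Condition": { "StringEquals": { "aws:ResourceTag/Department": "Finance" } }
}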
To comply with regulatory requirements, a Solutions Architect needs to ensure that IAM users can only modify AWS resources if they use a specific client application. Which IAM condition key can be used to enforce this policy?
A. aws:SourceIp
B. aws:UserAgent
C. aws:RequestTag/Client
D. aws:CalledVia
B. aws:UserAgent
The aws:UserAgent condition key allows policies to specify conditions based on the client application identified in the user agent string of the request.
An organization wants to enhance the security of their AWS environment by ensuring that certain sensitive actions, like terminating EC2 instances, can only be performed by users who have authenticated using Multi-Factor Authentication (MFA). Which IAM policy condition key should be used to enforce this security requirement?
A. aws:MultiFactorAuthPresent
B. aws:SecureTransport
C. aws:TokenIssueTime
D. aws:UserAgent
A. aws:MultiFactorAuthPresent
The aws:MultiFactorAuthPresent condition key in IAM policies is used to verify whether the requester has authenticated with Multi-Factor Authentication (MFA). This condition can be set to true to enforce that the specified action is allowed only when the user is MFA-authenticated, enhancing the security for sensitive operations.
Example:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:TerminateInstances", "Resource": "*", "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}} } ] }
What is a primary feature of AWS IAM Identity Center?
A. It allows for the creation and management of AWS resources.
B. It is a web service for organizing and federating access to AWS accounts and business applications.
C. It is used for hardware-based key storage for cryptographic operations.
D. It offers a centralized service for billing and cost management of AWS resources.
B. It is a web service for organizing and federating access to AWS accounts and business applications.
AWS IAM Identity Center is designed to help manage access to AWS accounts and business applications, providing a single location to centrally manage access. It allows users to sign in once and access multiple accounts and applications.
Which feature does AWS IAM Identity Center (SSO) provide to enhance security and streamline user access management across multiple AWS accounts?
A. Multi-factor authentication for each AWS account separately.
B. Centralized user access to multiple AWS accounts with a single set of credentials.
C. Automated encryption and decryption of AWS secrets.
D. Direct control over the underlying EC2 instances running IAM services.
B. Centralized user access to multiple AWS accounts with a single set of credentials.
AWS IAM Identity Center allows users to access multiple AWS accounts and applications using a single set of credentials, thereby centralizing and streamlining user access management
In AWS IAM Identity Center, which of the following best describes the function of permission sets?
A. They are used to assign EC2 instance types to users.
B. They define the IAM roles that users can assume in AWS accounts.
C. They encrypt data stored in S3 buckets.
D. They monitor and log user activity within the AWS Management Console.
B. They define the IAM roles that users can assume in AWS accounts
In AWS IAM Identity Center, permission sets are used to define the IAM roles that can be assumed by users when accessing AWS accounts. This allows for fine-grained access control
How does AWS IAM Identity Center integrate with existing corporate directories?
A. It replaces the need for a corporate directory with its own user database.
B. It provides a physical storage solution for corporate directory data.
C. It enables integration with existing directories like Microsoft Active Directory for user authentication.
D. It only integrates with AWS Managed Microsoft AD and not with any on-premises directories.
C. It enables integration with existing directories like Microsoft Active Directory for user authentication.
AWS IAM Identity Center supports integration with existing corporate directories, such as Microsoft Active Directory, to authenticate and manage user access, allowing for a seamless connection between AWS and existing user management systems
Which EC2 Purchasing Option can provide you the biggest discount, but it is not suitable for critical jobs or databases?
1) Convertible Reserved Instances
2) Dedicated Hosts
3) Spot Instances
3) Spot Instances
Spot Instances are good for short workloads, and this is the cheapest EC2 Purchasing Option. However, they are less reliable because you can lose your EC2 instance at any time.
What should you use to control traffic in and out of EC2 instances?
1) Network Access Control List (NACL)
2) Security Groups
3) IAM Policies
2) Security Groups
Security Groups operate at the EC2 instance level and can control traffic.
How long can you reserve an EC2 Reserved Instance?
1) 1 or 3 years
2) 2 or 4 years
3) 6 months or 1 year
4) Anytime between 1 and 3 years
1) 1 or 3 years
EC2 Reserved Instances can be reserved for 1 or 3 years only.
You would like to deploy a High-Performance Computing (HPC) application on EC2 instances. Which EC2 instance type should you choose?
1) Storage Optimized
2) Compute Optimized
3) Memory Optimized
4) General Purpose
2) Compute Optimized
Compute Optimized EC2 instances are great for compute-intensive workloads requiring high-performance processors (e.g., batch processing, media transcoding, high-performance computing, scientific modeling & machine learning, and dedicated gaming servers).
Which EC2 Purchasing Option should you use for an application you plan to run on a server continuously for 1 year?
1) Reserved Instances
2) Spot Instances
3) On-Demand Instances
1) Reserved Instances
Reserved Instances are good for long workloads. You can reserve EC2 instances for 1 or 3 years.
You are preparing to launch an application that will be hosted on a set of EC2 instances. This application needs some software installation and some OS packages need to be updated during the first launch. What is the best way to achieve this when you launch the EC2 instances?
1) Connect to each EC2 instance using SSH, then install the required software and update your OS packages manually
2) Write a bash script that installs the required software and updates your OS packages, then contact AWS Support and provide them with the script. They will run it on your EC2 instances at launch
3) Write a bash script that installs the required software and updates your OS packages, then use this script in the EC2 User Data when you launch your EC2 instances
3) Write a bash script that installs the required software and updates your OS packages, then use this script in the EC2 User Data when you launch your EC2 instances
EC2 User Data is used to bootstrap your EC2 instances using a bash script. This script can contain commands such as installing software packages, downloading files from the Internet, or anything else you want.
Which EC2 Instance Type should you choose for a critical application that uses an in-memory database?
1) Storage Optimized
2) Compute Optimized
3) Memory Optimized
4) General Purpose
3) Memory Optimized
Memory Optimized EC2 instances are great for workloads requiring large data sets in memory.
You have an e-commerce application with an OLTP database hosted on-premises. This application is so popular that its database receives thousands of requests per second. You want to migrate the database to an EC2 instance. Which EC2 Instance Type should you choose to handle this high-frequency OLTP database?
1) Storage Optimized
2) Compute Optimized
3) Memory Optimized
4) General Purpose
1) Storage Optimized
Storage Optimized EC2 instances are great for workloads requiring high, sequential read/write access to large data sets on local storage.
Security Groups can be attached to only one EC2 instance.
True
False
False
Security Groups can be attached to multiple EC2 instances within the same AWS Region/VPC.
You’re planning to migrate on-premises applications to AWS. Your company has strict compliance requirements that require your applications to run on dedicated servers. You also need to use your own server-bound software license to reduce costs. Which EC2 Purchasing Option is suitable for you?
1) Convertible Reserved Instances
2) Dedicated Hosts
3) Spot Instances
2) Dedicated Hosts
Dedicated Hosts are good for companies with strong compliance needs or for software that has complicated licensing models. This is the most expensive EC2 Purchasing Option available.
You would like to deploy a database technology on an EC2 instance, and the vendor license bills you based on the physical cores and underlying sockets. Which EC2 Purchasing Option gives you visibility into them?
1) Spot Instances
2) On-Demand
3) Dedicated Hosts
4) Reserved Instances
3) Dedicated Hosts
Spot Fleet is a set of Spot Instances and optionally ……………
1) Dedicated Instances
2) On-Demand Instances
3) Dedicated Hosts
4) Reserved Instances
2) On-Demand Instances
Spot Fleet is a set of Spot Instances and optionally On-Demand Instances. It allows you to automatically request Spot Instances at the lowest price.
You have an e-commerce website and you are preparing for Black Friday, which is the biggest sale of the year. You expect your traffic to increase 100x. Your website is already using an SQS Standard Queue, and you’re running a fleet of EC2 instances in an Auto Scaling Group to consume SQS messages. What should you do to prepare your SQS Queue?
1) Contact AWS Support to pre-warm your SQS Standard Queue
2) Enable Auto Scaling in your SQS queue
3) Increase the capacity of the SQS queue
4) Do nothing, SQS scales automatically
4) Do nothing, SQS scales automatically
You have an SQS Queue where each consumer polls 10 messages at a time and finishes processing them in 1 minute. After a while, you noticed that the same SQS messages are received by different consumers resulting in your messages being processed more than once. What should you do to resolve this issue?
1) Enable Long Polling
2) Add DelaySeconds parameter to the messages when being produced
3) Increase the Visibility Timeout
4) Decrease the Visibility Timeout
3) Increase the Visibility Timeout
SQS Visibility Timeout is a period of time during which Amazon SQS prevents other consumers from receiving and processing the same message again. A message is hidden from other consumers only after it has been received from the queue. Increasing the Visibility Timeout gives the consumer more time to process the message and prevents it from being read more than once. (default: 30 sec., min.: 0 sec., max.: 12 hours)
Which SQS Queue type allows your messages to be processed exactly once and in order?
1) SQS Standard Queue
2) SQS Dead Letter Queue
3) SQS Delay Queue
4) SQS FIFO Queue
4) SQS FIFO Queue
SQS FIFO (First-In-First-Out) Queues have all the capabilities of the SQS Standard Queue, plus the following two features. First, the order in which messages are sent and received is strictly preserved, and a message is delivered once and remains available until a consumer processes and deletes it. Second, duplicate messages are not introduced into the queue.
You have 3 different applications to which you’d like to send the same message. All 3 applications are using SQS. Which is the best approach to choose?
1) Use SQS Replication Feature
2) Use SNS + SQS Fan Out Pattern
3) Send messages Individually to 3 SQS queues
2) Use SNS + SQS Fan Out Pattern
This is a common pattern where only one message is sent to the SNS topic and is then “fanned out” to multiple SQS queues. This approach is fully decoupled, has no data loss, and gives you the ability to add more SQS queues (more applications) over time.
You have a Kinesis data stream with 6 shards provisioned. This data stream usually receives 5 MB/s of data and sends out 8 MB/s. Occasionally, your traffic spikes up to 2x and you get a ProvisionedThroughputExceeded exception. What should you do to resolve the issue?
1) Add more Shards
2) Enable Kinesis Replication
3) Use SQS as a buffer to Kinesis
1) Add more Shards
The capacity limits of a Kinesis data stream are defined by the number of shards within the data stream. The limits can be exceeded by either data throughput or the number of read calls. Each shard allows for 1 MB/s incoming data and 2 MB/s outgoing data. With 6 shards you have 6 MB/s in and 12 MB/s out, so a 2x spike (10 MB/s in, 16 MB/s out) exceeds both limits. You should increase the number of shards within your data stream to provide enough capacity.
You have a website where you want to analyze clickstream data, such as the sequence of clicks a user makes, the amount of time a user spends, and where the navigation begins and how it ends. You decided to use Amazon Kinesis, so you have configured the website to send this clickstream data to a Kinesis data stream. While checking the data sent to your Kinesis data stream, you found that the users’ data is not ordered and the data for one individual user is spread across many shards. How would you fix this problem?
1) There are too many shards, you should only use 1 shard
2) You shouldn’t use multiple consumers, only one and it should re-order data
3) For each record sent to Kinesis, use a partition key that represents the identity of the user
3) For each record sent to Kinesis, use a partition key that represents the identity of the user
Kinesis Data Stream uses the partition key associated with each data record to determine which shard a given data record belongs to. When you use the identity of each user as the partition key, this ensures the data for each user is ordered hence sent to the same shard.
You are running an application that produces a large amount of real-time data that you want to load into S3 and Redshift. This data also needs to be transformed before being delivered to its destination. Which architecture would you choose?
1) SQS + AWS Lambda
2) SNS + HTTP Endpoint
3) Kinesis Data Streams + Kinesis Data Firehose
3) Kinesis Data Streams + Kinesis Data Firehose
This is a perfect combo of technologies for loading near real-time data into S3 and Redshift. Kinesis Data Firehose supports custom data transformations using AWS Lambda.
Which of the following is NOT a supported subscriber for AWS SNS?
1) Amazon Kinesis Data Streams
2) Amazon SQS
3) HTTP(S) Endpoint
4) AWS Lambda
1) Amazon Kinesis Data Streams
Note: Kinesis Data Firehose is now supported, but not Kinesis Data Streams.
Which AWS service helps you when you want to send email notifications to your users?
1) Amazon SQS with AWS Lambda
2) Amazon SNS
3) Amazon Kinesis
2) Amazon SNS
You’re running many microservices applications on-premises and they communicate using a message broker that supports the MQTT protocol. You’re planning to migrate these applications to AWS without re-engineering them or modifying their code. Which AWS service allows you to get a managed message broker that supports the MQTT protocol?
1) Amazon SQS
2) Amazon SNS
3) Amazon Kinesis
4) Amazon MQ
4) Amazon MQ
Amazon MQ supports industry-standard APIs such as JMS and NMS, and protocols for messaging, including AMQP, STOMP, MQTT, and WebSocket.
An e-commerce company is preparing for a big marketing promotion that will bring millions of transactions. Their website is hosted on EC2 instances in an Auto Scaling Group and they are using Amazon Aurora as their database. In the last promotion, the Aurora database was a bottleneck and a lot of transactions failed because the database wasn’t prepared to handle that many transactions. What do you recommend to handle those transactions and prevent any failed transactions?
1) Use SQS as a buffer to write to Aurora
2) Host the website in AWS Fargate instead of EC2 instances
3) Migrate Aurora to RDS for SQL Server
1) Use SQS as a buffer to write to Aurora
SQS acts as a buffer that queues the write requests, so consumers can write them to Aurora at a rate the database can handle without losing any transactions.
A company is using Amazon Kinesis Data Streams to ingest clickstream data and then do some analytical processes on it. There is a campaign in the next few days and the traffic is unpredictable which might grow up to 100x. What Kinesis Data Stream capacity mode do you recommend?
1) Provisioned Mode
2) On-demand Mode
2) On-demand Mode
On-demand capacity mode automatically scales to accommodate unpredictable traffic without any capacity planning.
What type of firewall can be used in conjunction with API Gateway to help prevent DDoS attacks?
1) Security group
2) Web Application Firewall (WAF)
3) Host firewall
4) NACL
2) Web Application Firewall (WAF)
If your application needs to process 5,000 messages per second, which type of SQS queue would you use?
1) SQS Performance
2) SQS Standard
3) SQS FIFO
4) SQS paired with an EC2 Auto Scaling Group
2) SQS Standard
SQS Standard Queues support a nearly unlimited number of messages per second. Auto Scaling Groups are for EC2 instances only, not for SQS queues.
You need to create a new message broker application in AWS. The new application needs to support the JMS messaging protocol. Which service fits your needs?
1) Amazon MQ
2) Amazon SQS
3) Amazon SNS
4) Amazon MSK
1) Amazon MQ
Which of the following endpoints can use a custom delivery policy to define how Amazon SNS retries the delivery of messages when server-side errors occur?
1) Email
2) SMS
3) HTTP/S
4) It can’t retry messages
3) HTTP/S
Which service allows for bidirectional data flows between AWS and SaaS applications?
1) AWS AppSync
2) Amazon S3 Replication
3) Amazon AppFlow
4) Amazon MSK
3) Amazon AppFlow
Which tool can be used to sideline malformed SQS messages?
1) It can’t be done
2) Side-letter queues (SLQ)
3) Alive-letter queues (ALQ)
4) Dead-letter queues (DLQ)
4) Dead-letter queues (DLQ)
Which AWS service would you choose for serverless orchestration of long-running (up to 1 year) workflows that can integrate with several other AWS services?
1) AWS Step Functions
2) AWS EC2 Spot Instances
3) AWS Lambda
4) Amazon MQ
1) AWS Step Functions
This is a serverless orchestration service combining different AWS services for business applications.
Which layers of our applications need to be loosely coupled?
1) Internal and external
2) Just internal
3) None — it’s bad practice to loosely couple applications
4) Just external
1) Internal and external
All levels of your architecture need to be loosely coupled!
What is the largest message size you can store in SQS?
1) 256KB
2) 512KB
3) 128KB
4) 1MB
1) 256KB
You have launched an EC2 instance that will host a NodeJS application. After installing all the required software and configuring your application, you noted down the EC2 instance’s public IPv4 so you can access it. Then you stopped and started your EC2 instance to complete the application configuration. After the restart, you couldn’t access the EC2 instance and found that its public IPv4 had changed. What should you do to assign a fixed public IPv4 to your EC2 instance?
1) Allocate an Elastic IP and assign it to your EC2 instance
2) From inside your EC2 instance OS, change network configuration from DHCP to static and assign it a public IPv4
3) Contact AWS Support and request a fixed public IPv4 to your EC2 instance
4) This can’t be done, you can only assign a fixed private IPv4 to your EC2 instance
1) Allocate an Elastic IP and assign it to your EC2 instance
An Elastic IP is a public IPv4 address that you keep for as long as you want, and you can attach it to one EC2 instance at a time.
You have an application performing big data analysis hosted on a fleet of EC2 instances. You want to ensure your EC2 instances have the highest networking performance while communicating with each other. Which EC2 Placement Group should you choose?
1) Spread Placement Group
2) Cluster Placement Group
3) Partition Placement Group
2) Cluster Placement Group
Cluster Placement Groups place your EC2 instances next to each other which gives you high-performance computing and networking.
You have a critical application hosted on a fleet of EC2 instances in which you want to achieve maximum availability when there’s an AZ failure. Which EC2 Placement Group should you choose?
1) Spread Placement Group
2) Cluster Placement Group
3) Partition Placement Group
1) Spread Placement Group
Spread Placement Group places your EC2 instances on different physical hardware across different AZs.
Elastic Network Interface (ENI) can be attached to EC2 instances in another AZ.
True
False
False
Elastic Network Interfaces (ENIs) are bound to a specific AZ. You cannot attach an ENI to an EC2 instance in a different AZ.
The following are true regarding EC2 Hibernate, EXCEPT:
1) EC2 Instance Root Volume must be an Instance Store volume
2) Supports On-Demand and Reserved Instances
3) EC2 Instance RAM must be less than 150 GB
4) EC2 Instance Root Volume type must be an EBS volume
1) EC2 Instance Root Volume must be an Instance Store volume
To enable EC2 Hibernate, the EC2 Instance Root Volume type must be an EBS volume and must be encrypted to ensure the protection of sensitive content.
When would you need to create an EC2 Dedicated Instance?
1) When you need to make sure that AWS support can assist you with a hardware failure
2) When you have an auditing requirement to run your hosts on single-tenant hardware
3) When you want to ensure that your instance will never fail
4) When you need the cheapest price for an instance
2) When you have an auditing requirement to run your hosts on single-tenant hardware
What is EC2 metadata commonly used for?
1) To configure your Security Groups
2) When your code needs to learn something about the EC2 instances that it’s running on
3) When an S3 bucket needs to see where an object was uploaded from
4) To determine how long an instance has been online, which AWS uses to calculate your bill
2) When your code needs to learn something about the EC2 instances that it’s running on
EC2 instance metadata can be used to configure or manage a running instance, and can also be used to access user data that was specified when the instance was launched
What does AWS Outposts do?
1) A remote monitoring tool used to monitor your private cloud
2) Allows you to extend your data center to AWS GovCloud
3) Edge computing device designed for airplanes
4) Allows you to extend the power of the AWS data center to your own data center
4) Allows you to extend the power of the AWS data center to your own data center
Outposts allows you to extend the AWS data center to your own data center
What happens when your Spot instance is chosen by AWS for termination?
1) You will get a 10-minute notification sent to your specified email address.
2) You will get a one-hour notification sent via Amazon SNS directly to your EC2 instance on port 65.
3) While it is possible that your Spot Instance is interrupted before the warnings can be made, AWS makes a best effort to provide two-minute Spot Instance interruption notices to the metadata of your EC2 instance(s).
4) You will get no notification and your host will be terminated without warning.
3) While it is possible that your Spot Instance is interrupted before the warnings can be made, AWS makes a best effort to provide two-minute Spot Instance interruption notices to the metadata of your EC2 instance(s).
If your Spot Instance has been marked for termination, a notification will be best-effort posted to the metadata of your EC2 instance two minutes before it is stopped or terminated
What service allows you to directly visualize your data in AWS?
1) S3
2) Redshift
3) EMR
4) QuickSight
4) QuickSight
QuickSight allows you to create dashboards and visualize your data
Which of the following scenarios are valid use cases for AWS Data Pipeline? (choose 3)
1) Exporting Amazon RDS data to Amazon S3
2) Restarting Amazon EC2 instances
3) Importing and exporting Amazon DynamoDB data
4) Copying CSV files between Amazon S3 buckets
5) Copying CSV data between two on-premises storage devices
1) Exporting Amazon RDS data to Amazon S3
3) Importing and exporting Amazon DynamoDB data
4) Copying CSV files between Amazon S3 buckets
If you need to create a new streaming application requiring Apache Kafka as the primary component, which AWS service would be the best fit for this requirement?
1) Amazon MQ
2) Amazon Managed Streaming for Apache Kafka (MSK)
3) Amazon OpenStreaming Service
4) Amazon Kinesis
2) Amazon Managed Streaming for Apache Kafka (MSK)
Amazon MSK is a fully managed service for running data-streaming applications that leverage Apache Kafka.
What type of database is Redshift?
1) Non-relational
2) Relational
3) NoSQL
4) Unrelational
2) Relational
Redshift is a relational database
What AWS service allows you to run SQL queries against exabytes of unstructured data in Amazon S3 without needing to load or transform the data?
1) Amazon X-Ray
2) Amazon Redshift Serverless
3) Amazon OpenSearch Service
4) Amazon Redshift Spectrum
4) Amazon Redshift Spectrum
Redshift Spectrum allows you to directly run SQL queries against exabytes of unstructured data in Amazon S3. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Parquet, and others. Redshift Spectrum automatically scales query compute capacity based on the data being retrieved, so queries against Amazon S3 run fast, regardless of data set size
What service would you use to create a logging solution involving visualization of log file analytics or BI reports?
1) Amazon Athena
2) Amazon OpenSearch Service (successor to Elasticsearch)
3) Amazon S3
4) Amazon EMR
2) Amazon OpenSearch Service (successor to Elasticsearch)
Which AWS service would be best for analyzing large volumes of data, handling complex queries efficiently, delivering fast query performance, and having the ability to scale effectively to support future data growth?
1) Amazon Redshift
2) Amazon S3
3) DynamoDB
4) Amazon RDS
1) Amazon Redshift
Redshift would be the best solution for analyzing large volumes of data with complex queries, fast query performance, and scalability. Amazon Redshift is specifically designed for data warehousing and analytics workloads. It provides columnar storage, parallel query execution, and automatic scaling capabilities to handle large datasets and complex queries efficiently
If you needed to implement a managed ETL service for automating your movement of data between AWS services, which service would best fit your needs?
1) Amazon S3 Event Notifications
2) Amazon ETL
3) AWS Data Pipeline
4) Amazon EventBridge
3) AWS Data Pipeline
AWS Data Pipeline is a managed extract, transform, load (ETL) service for automating movement and transformation of your data
Which of the following statements is true about AWS Glue?
1) In AWS Glue, you can specify the number of DPUs (data processing units) you want to allocate to an ETL job.
2) Auto Scaling based on a workload is NOT a serverless feature in AWS Glue.
3) On the Free tier, AWS Glue will store 1,000 objects for free.
4) AWS Glue lets you discover and connect up to 10 different data sources.
1) In AWS Glue, you can specify the number of DPUs (data processing units) you want to allocate to an ETL job.
You can specify the number of DPUs for an ETL job. A Glue ETL job must have a minimum of 2 DPUs. AWS Glue allocates 10 DPUs to each ETL job by default.
You can use _____ to build a schema for your data, and _____ to query the data that’s stored in S3
1) EC2, SQS
2) EC2, Glue
3) Athena, Lambda
4) Glue, Athena
4) Glue, Athena
Which service provides the easiest way to run ad hoc queries across multiple objects in S3 without the need to set up or manage any servers?
1) EMR
2) Glue
3) S3
4) Athena
4) Athena
Which AWS service offers a fully managed way of running search and analytics engines?
1) AWS Athena
2) Amazon Elastic Analytics Service
3) Amazon QuickSight
4) Amazon OpenSearch Service
4) Amazon OpenSearch Service
_____ provides real-time streaming of data.
1) Kinesis Data Analytics
2) SQS
3) Kinesis Data Streams
4) Kinesis Data Firehose
3) Kinesis Data Streams
You would like to have a database that is efficient at performing analytical queries on large sets of columnar data. You would like to connect to this Data Warehouse using a reporting and dashboard tool such as Amazon QuickSight. Which AWS technology do you recommend?
1) Amazon RDS
2) Amazon S3
3) Amazon Redshift
4) Amazon Neptune
3) Amazon Redshift
You have a lot of log files stored in an S3 bucket on which you want to perform a quick analysis, serverless if possible, to filter the logs and find users that attempted to make an unauthorized action. Which AWS service allows you to do so?
1) Amazon DynamoDB
2) Amazon Redshift
3) S3 Glacier
4) Amazon Athena
4) Amazon Athena
As a Solutions Architect, you have been instructed to prepare a disaster recovery plan for a Redshift cluster. What should you do?
1) Enable Multi-AZ
2) Enable Automated Snapshots, then configure your Redshift cluster to automatically copy snapshots to another AWS region
3) Take a snapshot, then restore to a Redshift Global cluster
2) Enable Automated Snapshots, then configure your Redshift cluster to automatically copy snapshots to another AWS region
Which feature in Redshift forces all COPY and UNLOAD traffic moving between your cluster and data repositories through your VPC?
1) Enhanced VPC Routing
2) Improved VPC Routing
3) Redshift Spectrum
1) Enhanced VPC Routing
You are running a gaming website that is using DynamoDB as its data store. Users have been asking for a search feature to find other gamers by name, with partial matches if possible. Which AWS technology do you recommend to implement this feature?
1) Amazon DynamoDB
2) Amazon Redshift
3) Amazon OpenSearch Service
4) Amazon Neptune
3) Amazon OpenSearch Service
Which AWS service allows you to create, run, and monitor ETL (extract, transform, and load) jobs in a few clicks?
1) AWS Glue
2) Amazon Redshift
3) Amazon RDS
4) Amazon DynamoDB
1) AWS Glue
A company is using AWS to host its public websites and internal applications. These websites and applications generate a lot of logs and traces. There is a requirement to centrally store those logs and efficiently search and analyze them in real-time to detect errors and threats. Which AWS service can help them efficiently store and analyze logs?
1) Amazon S3
2) Amazon OpenSearch service
3) Amazon ElastiCache
4) Amazon OLDB
2) Amazon OpenSearch service
……………………….. makes it easy and cost-effective for data engineers and analysts to run applications built using open source big data frameworks such as Apache Spark, Hive, or Presto without having to operate or manage clusters.
1) AWS Lambda
2) Amazon EMR
3) Amazon Athena
4) Amazon OpenSearch Service
2) Amazon EMR
An e-commerce company has all its historical data such as orders, customers, revenues, and sales for the previous years hosted on a Redshift cluster. There is a requirement to generate some dashboards and reports indicating the revenues from the previous years and the total sales, so it will be easy to define the requirements for the next year. The DevOps team is assigned to find an AWS service that can help define those dashboards and have native integration with Redshift. Which AWS service is best suited?
1) Amazon OpenSearch Service
2) Amazon Athena
3) Amazon QuickSight
4) Amazon EMR
3) Amazon QuickSight
Which AWS Glue feature allows you to save and track the data that has already been processed during a previous run of a Glue ETL job?
1) Glue Job Bookmarks
2) Glue Elastic Views
3) Glue Streaming ETL
4) Glue DataBrew
1) Glue Job Bookmarks
You are a DevOps engineer in a machine learning company which has 3 TB of JSON files stored in an S3 bucket. There’s a requirement to do some analytics on those files using Amazon Athena, and you have been tasked to find a way to convert the files’ format from JSON to Apache Parquet. Which AWS service is best suited?
1) S3 Object Versioning
2) Kinesis Data Streams
3) Amazon MSK
4) AWS Glue
4) AWS Glue
You have an on-premises application that is used together with an on-premises Apache Kafka to receive a stream of clickstream events from multiple websites. You have been tasked to migrate this application as soon as possible without any code changes. You decided to host the application on an EC2 instance. What is the best option you recommend to migrate Apache Kafka?
1) Kinesis Data Streams
2) AWS Glue
3) Amazon MSK
4) Kinesis Data Analytics
3) Amazon MSK
You have data stored in RDS, S3 buckets and you are using AWS Lake Formation as a data lake to collect, move and catalog data so you can do some analytics. You have a lot of big data and ML engineers in the company and you want to control access to part of the data as it might contain sensitive information. What can you use?
1) Lake Formation Fine-grained Access Control
2) Amazon Cognito
3) AWS Shield
4) S3 Object Lock
1) Lake Formation Fine-grained Access Control
Which AWS service is most appropriate when you want to perform real-time analytics on streams of data?
1) Amazon SQS
2) Amazon SNS
3) Amazon Kinesis Data Analytics
4) Amazon Kinesis Data Firehose
3) Amazon Kinesis Data Analytics
You have multiple Docker-based applications hosted on-premises that you want to migrate to AWS. You don’t want to provision or manage any infrastructure; you just want to run your containers on AWS. Which AWS service should you choose?
1) ECS in EC2 Launch Mode
2) ECR
3) AWS Fargate on ECS
3) AWS Fargate on ECS
AWS Fargate allows you to run your containers on AWS without managing any servers.
Amazon Elastic Container Service (ECS) has two Launch Types: ……………… and ………………
1) Amazon EC2 Launch Type and Fargate Launch Type
2) Amazon EC2 Launch Type and EKS Launch Type
3) Fargate Launch Type and EKS Launch Type
1) Amazon EC2 Launch Type and Fargate Launch Type
You have an application hosted on an ECS Cluster (EC2 Launch Type) where you want your ECS tasks to upload files to an S3 bucket. Which IAM Role for your ECS Tasks should you modify?
1) EC2 Instance Profile
2) ECS Task Role
2) ECS Task Role
ECS Task Role is the IAM Role used by the ECS task itself. Use when your container wants to call other AWS services like S3, SQS, etc.
You’re planning to migrate a WordPress website running on Docker containers from on-premises to AWS. You have decided to run the application in an ECS Cluster, but you want your Docker containers to access the same WordPress website content, such as website files, images, videos, etc. What do you recommend to achieve this?
1) Mount an EFS volume
2) Mount an EBS volume
3) Use an EC2 Instance Store
1) Mount an EFS volume
EFS volume can be shared between different EC2 instances and different ECS Tasks. It can be used as a persistent multi-AZ shared storage for your containers.
You are deploying an application on an ECS Cluster made of EC2 instances. Currently, the cluster is hosting one application that is issuing API calls to DynamoDB successfully. Upon adding a second application, which issues API calls to S3, you are getting authorization issues. What should you do to resolve the problem and ensure proper security?
1) Edit the EC2 instance role to add permissions to S3
2) Create an IAM task role for the new application
3) Enable the Fargate mode
4) Edit the S3 bucket policy to allow the ECS task
2) Create an IAM task role for the new application
You are migrating your on-premises Docker-based applications to Amazon ECS. You were using the Docker Hub Container Image Library as your container image repository. Which alternative AWS service is fully integrated with Amazon ECS?
1) AWS Fargate
2) ECR
3) EKS
4) EC2
2) ECR
Amazon ECR is a fully managed container registry that makes it easy to store, manage, share, and deploy your container images. ECR is fully integrated with Amazon ECS, allowing easy retrieval of container images from ECR while managing and running containers using ECS.
Amazon EKS supports the following node types, EXCEPT ………………..
1) Managed Node Groups
2) Self-Managed Nodes
3) AWS Fargate
4) AWS Lambda
4) AWS Lambda
A developer has a running website and APIs on his local machine using containers and he wants to deploy both of them on AWS. The developer is new to AWS and doesn’t know much about different AWS services. Which of the following AWS services allows the developer to build and deploy the website and the APIs in the easiest way according to AWS best practices?
1) AWS App Runner
2) EC2 Instances & Application Load Balancer
3) Amazon ECS
4) AWS Fargate
1) AWS App Runner
In Amazon ECS, what is the role of a task definition?
A. To define the EC2 instances that run the application containers.
B. To manage user access and permissions for containerized applications.
C. To provide a blueprint for running Docker containers, including the container image and resource allocation.
D. To balance the load across multiple containers and distribute incoming traffic.
C. To provide a blueprint for running Docker containers, including the container image and resource allocation.
In Amazon ECS, a task definition is a blueprint for your application that describes how a container should run, including details like the Docker image, CPU and memory allocations, environment variables, and more.
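A minimal sketch of a task definition (all names, ARNs, and values hypothetical); note the taskRoleArn field, which is the ECS Task Role that grants containers in the task access to other AWS services:
{
  "family": "web-app",
  "taskRoleArn": "arn:aws:iam::123456789012:role/app-task-role",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [{
    "name": "web",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
    "portMappings": [{ "containerPort": 80 }]
  }]
}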
Which service would you use in AWS to orchestrate and manage a cluster of containers using Kubernetes?
A. Amazon ECS
B. Amazon EKS
C. AWS Fargate
D. AWS Lambda
B. Amazon EKS
Amazon EKS (Elastic Kubernetes Service) is a managed service that makes it easier to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
For containerized applications requiring persistent storage, which AWS service can be integrated with Amazon EKS to provide dynamic volume provisioning?
A. Amazon EBS
B. Amazon S3
C. AWS CloudFormation
D. Amazon VPC
A. Amazon EBS
Amazon EBS (Elastic Block Store) can be used with Amazon EKS to provide persistent block storage for containerized applications. EBS volumes can be dynamically provisioned as part of the EKS deployment process.
A company wants to deploy a new web application in AWS using containers. They need to ensure high availability and load balancing across multiple Availability Zones. Which combination of services would be most appropriate for this requirement?
A. Amazon ECS with AWS Fargate and Amazon Route 53
B. Amazon EKS with EC2 Auto Scaling Groups and AWS Lambda
C. Amazon EC2 with Elastic Load Balancing and Amazon S3
D. AWS Lambda with Amazon API Gateway and Amazon DynamoDB
A. Amazon ECS with AWS Fargate and Amazon Route 53
Amazon ECS with AWS Fargate allows for serverless container deployments, and when combined with Elastic Load Balancing and Route 53, it offers high availability across multiple Availability Zones and efficient traffic distribution.
A media company is processing large video files using a containerized batch processing application. They need to process jobs as they arrive without over-provisioning resources. What is the most cost-effective AWS solution for this scenario?
A. Deploy the application on Amazon EC2 instances managed by EC2 Auto Scaling.
B. Utilize AWS Batch with Spot Instances for processing jobs.
C. Use Amazon EKS with On-Demand EC2 Instances.
D. Implement the application as AWS Lambda functions triggered by Amazon S3 events.
B. Utilize AWS Batch with Spot Instances for processing jobs.
AWS Batch efficiently runs batch jobs and, when combined with Spot Instances, can provide a cost-effective solution for processing jobs as they arrive without the need for over-provisioning.
An enterprise is running a microservices architecture on AWS using Amazon EKS. They need to ensure that each microservice can scale independently based on demand. Which feature should they implement?
A. EC2 Auto Scaling Groups with custom scaling policies for each microservice.
B. Horizontal Pod Autoscaler in EKS for each microservice deployment.
C. AWS Fargate with scheduled scaling actions.
D. Amazon ECS service autoscaling for each microservice.
B. Horizontal Pod Autoscaler in EKS for each microservice deployment.
The Horizontal Pod Autoscaler in Amazon EKS automatically scales the number of pods in a deployment based on observed CPU utilization or other selected metrics, ideal for independently scaling microservices.
A financial services company needs to run a mission-critical application with strict compliance and security requirements. The application must be hosted in a containerized environment. Which setup should they use?
A. Amazon ECS with AWS Fargate running in a private subnet and integration with AWS Key Management Service for encryption.
B. Amazon EC2 instances with Docker, running in public subnets with security groups and NACLs configured for security.
C. AWS Lambda functions for each component of the application, with VPC peering for connectivity to on-premises systems.
D. Amazon EKS with dedicated EC2 instances, running within a private subnet and using IAM roles for secure access to AWS services.
D. Amazon EKS with dedicated EC2 instances, running within a private subnet and using IAM roles for secure access to AWS services.
Amazon EKS provides a secure and scalable environment for containerized applications. Using dedicated EC2 instances in a private subnet enhances security, and IAM roles ensure secure access to other AWS services, meeting compliance and security needs.
What AWS service can create EC2 instances and place containers in them based on your task definitions?
1) ELB
2) Lambda
3) Docker
4) ECS
4) ECS
ECS manages this process for you
Which of the following are features of Amazon Elastic Container Registry (Amazon ECR)?
Choose 3:
1) Scan on Push
2) Report Personal Identifiable Information (PII) on Push
3) Lifecycle policies
4) Duplicate images
5) Image tag immutability
1) Scan on Push
3) Lifecycle policies
5) Image tag immutability
You expect your new application to have variable reads and writes to the relational database. Which service allows you to test the optimal sizing of your instances while also keeping your budget in mind?
1) Amazon RDS
2) Amazon Aurora Serverless
3) MySQL on EC2
4) DynamoDB
2) Amazon Aurora Serverless
Amazon Aurora Serverless is an on-demand, autoscaling configuration for Amazon Aurora. It is perfect for workloads that have sudden and unpredictable increases in activity, where capacity is hard to plan. Since it is serverless, you also only pay for what you consume.
How can you easily collect insights regarding requests and responses for your AWS Lambda application?
1) Amazon CloudWatch
2) AWS CloudTrail
3) AWS X-Ray
4) Amazon OpenSearch
3) AWS X-Ray
When you see requests and responses, think AWS X-Ray. AWS X-Ray is a service that collects data about requests that your application serves. It provides tools that you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization.
How can you run your Kubernetes clusters on-premises while easily maintaining AWS best practices?
1) Amazon ECS Anywhere
2) Amazon EKS Anywhere
3) Reference the AWS Well-Architected Framework
4) VMware on AWS
2) Amazon EKS Anywhere
Amazon EKS Anywhere provides a means of managing Kubernetes clusters using the same operational excellence and best practices that AWS uses for its Amazon Elastic Kubernetes Service (Amazon EKS). It leverages the EKS Distro (EKS-D) for deploying, using, and managing Kubernetes clusters that run in your data centers.
You have decided to deploy an Amazon Aurora Serverless database. What do you specify to set the scaling limits?
1) Aurora capacity units
2) Aurora scaling units
3) Amazon Aurora Reserved Instances
4) DAX
1) Aurora capacity units
These are how the clusters scale. They are based on a certain amount of compute and memory. You can set a minimum and maximum for automatically scaling between the units.
You need an AWS-managed GraphQL interface for development. Which AWS service would meet this requirement?
1) AWS AppSync
2) Amazon Managed Grafana
3) Amazon Amplify
4) AWS Lambda
1) AWS AppSync
AWS AppSync provides a robust, scalable GraphQL interface for application developers to combine data from multiple sources, including Amazon DynamoDB, AWS Lambda, and HTTP APIs.
What is one thing EC2 instances allow you to configure but a serverless application doesn’t?
1) The ability to pay for the service.
2) VPC placement
3) The ability to configure the service.
4) Operating System
4) Operating System
In a serverless application, you don’t have access to the OS
What is the maximum amount of RAM you can allocate to a single Lambda function?
1) 512MB
2) 10GB
3) 1GB
4) 5GB
2) 10GB
Lambda supports up to 10GB of RAM
What feature of ECS and EKS allows you to run containers without having to manage the underlying hosts?
1) Fargate
2) S3
3) EC2
4) Lambda
1) Fargate
Which IAM entity is assigned to a Lambda function to provide it with permissions to access other AWS APIs?
1) Group
2) Role
3) Username and password
4) Secret Key and Access Key
2) Role
Roles should be used for Lambda to talk to other AWS APIs. Reference Documentation: AWS Lambda execution role
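The execution role’s trust policy is what lets the Lambda service assume the role; a minimal sketch:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "lambda.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}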
Which distribution allows you to leverage Amazon EKS Anywhere?
1) Amazon EKS Library
2) Amazon EKS Distro (EKS-D)
3) Amazon EKS Anywhere is not a real option
4) Amazon EKS Open-Source
2) Amazon EKS Distro (EKS-D)
Amazon EKS Distro (EKS-D) is a Kubernetes distribution based on and used by Amazon Elastic Kubernetes Service (EKS) to create reliable and secure Kubernetes clusters.
You have created a Lambda function that typically will take around 1 hour to process some data. The code works fine when you run it locally on your machine, but when you invoke the Lambda function it fails with a “timeout” error after 3 seconds. What should you do?
1) Configure your Lambda’s timeout to 25 minutes
2) Configure your Lambda’s memory to 10 GB
3) Run your code somewhere else (e.g. EC2 instance)
3) Run your code somewhere else (e.g. EC2 instance)
Lambda’s maximum execution time is 15 minutes. You can run your code somewhere else such as an EC2 instance or use Amazon ECS.
Before you create a DynamoDB table, you need to provision the EC2 instance the DynamoDB table will be running on.
True
False
False
DynamoDB is serverless with no servers to provision, patch, or manage and no software to install, maintain or operate. It automatically scales tables up and down to adjust for capacity and maintain performance. It provides both provisioned (specify RCU & WCU) and on-demand (pay for what you use) capacity modes.
You have provisioned a DynamoDB table with 10 RCUs and 10 WCUs. A month later you want to increase the RCU to handle more read traffic. What should you do?
1) Increase RCU and keep WCU the same
2) You need to increase both RCU and WCU
3) Increase RCU and decrease WCU
1) Increase RCU and keep WCU the same
RCU and WCU are decoupled, so you can increase/decrease each value separately.
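A sketch of the corresponding UpdateTable input (table name and values hypothetical, e.g., for aws dynamodb update-table --cli-input-json); the ProvisionedThroughput structure requires both values, so you pass the unchanged WCU alongside the new RCU:
{
  "TableName": "my-table",
  "ProvisionedThroughput": {
    "ReadCapacityUnits": 20,
    "WriteCapacityUnits": 10
  }
}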
You have an e-commerce website where you are using DynamoDB as your database. You are about to enter the Christmas sale and you have a few items which are very popular and you expect that they will be read often. Unfortunately, last year due to the huge traffic you had the ProvisionedThroughputExceededException exception. What would you do to prevent this error from happening again?
1) Increase the RCU to a very high value
2) Create a DAX Cluster
3) Migrate the database away from DynamoDB for the time of the sale
2) Create a DAX Cluster
DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to 10x performance improvement. It caches the most frequently used data, thus offloading the heavy reads on hot keys off your DynamoDB table, hence preventing the “ProvisionedThroughputExceededException” exception.
You have developed a mobile application that uses DynamoDB as its datastore. You want to automate sending welcome emails to new users after they sign up. What is the most efficient way to achieve this?
1) Schedule a Lambda function to run every minute using CloudWatch Events, scan the entire table looking for new users
2) Enable SNS and DynamoDB integration
3) Enable DynamoDB Streams and configure it to invoke a Lambda function to send emails
3) Enable DynamoDB Streams and configure it to invoke a Lambda function to send emails
DynamoDB Streams allows you to capture a time-ordered sequence of item-level modifications in a DynamoDB table. It’s integrated with AWS Lambda, so you can create triggers that automatically respond to events in real time. There is no such SNS and DynamoDB integration.
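A sketch of what the stream-triggered function might look like (the attribute name and email addresses are hypothetical), reacting only to newly inserted items and sending the welcome email via Amazon SES:

import boto3

ses = boto3.client("ses")

def handler(event, context):
    for record in event["Records"]:
        # Only react to newly created items (new user sign-ups).
        if record["eventName"] != "INSERT":
            continue
        new_image = record["dynamodb"]["NewImage"]
        email = new_image["email"]["S"]  # hypothetical attribute name
        ses.send_email(
            Source="welcome@example.com",  # hypothetical verified sender
            Destination={"ToAddresses": [email]},
            Message={
                "Subject": {"Data": "Welcome!"},
                "Body": {"Text": {"Data": "Thanks for signing up."}},
            },
        )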
To create a serverless API, you should integrate Amazon API Gateway with ………………….
1) EC2 Instance
2) Elastic Load Balancing
3) AWS Lambda
3) AWS Lambda
When you are using an Edge-Optimized API Gateway, your API Gateway lives in CloudFront Edge Locations across all AWS Regions.
True
False
False
An Edge-Optimized API Gateway is best for geographically distributed clients. API requests are routed to the nearest CloudFront Edge Location which improves latency. The API Gateway still lives in one AWS Region.
You are running an application in production that leverages DynamoDB as its datastore and experiences smooth, sustained usage. The application also needs to run in a development environment, where it will experience an unpredictable volume of requests. What is the most cost-effective solution that you recommend?
1) Use Provisioned Capacity Mode with AutoScaling enabled for both development and production
2) Use Provisioned Capacity Mode with AutoScaling enabled for production and On-Demand Capacity Mode for development
3) Use Provisioned Capacity Mode with AutoScaling enabled for development and On-Demand Capacity Mode for production
4) Use On-Demand Capacity Mode for both development and production
2) Use Provisioned Capacity Mode with AutoScaling enabled for production and On-Demand Capacity Mode for development
You have an application that is served globally using a CloudFront Distribution. You want to authenticate users at the CloudFront Edge Locations instead of having authentication requests go all the way to your origin. What should you use to satisfy this requirement?
1) Lambda@Edge
2) API Gateway
3) DynamoDB
4) AWS Global Accelerator
1) Lambda@Edge
Lambda@Edge is a feature of CloudFront that lets you run code closer to your users, which improves performance and reduces latency.
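A minimal sketch of a viewer-request Lambda@Edge handler that rejects unauthenticated requests at the Edge Location (the check itself is a placeholder; a real implementation would validate a token):

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    # CloudFront lowercases header names; each value is a list of dicts.
    if "authorization" not in headers:
        # Respond at the Edge instead of forwarding to the origin.
        return {
            "status": "401",
            "statusDescription": "Unauthorized",
            "body": "Authentication required",
        }

    # Authenticated: let the request continue to the origin.
    return request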
The maximum size of an item in a DynamoDB table is ……………….
1) 1 MB
2) 500 KB
3) 400 KB
4) 400 MB
3) 400 KB
Which AWS service allows you to build Serverless workflows using AWS services (e.g., Lambda) and supports human approval?
1) AWS Lambda
2) Amazon EC2
3) AWS Step Functions
4) AWS Storage Gateway
3) AWS Step Functions
A company has a serverless application on AWS which consists of Lambda, DynamoDB, and Step Functions. In the last month, there has been an increase in the number of requests against the application, which has resulted in increased DynamoDB costs, and requests have started to be throttled. Further investigation shows that the majority of requests are read requests against a few specific queries on the DynamoDB table. What do you recommend to prevent throttles and reduce costs efficiently?
1) Use an EC2 instance with Redis installed and place it between the Lambda function and DynamoDB table
2) Migrate from DynamoDB to Aurora and use ElastiCache to cache the most requested data
3) Migrate from DynamoDB to S3 and use CloudFront to cache the most requested data
4) Use DynamoDB Accelerator (DAX) to cache the most requested data
4) Use DynamoDB Accelerator (DAX) to cache the most requested data
You are a DevOps engineer in a football company that has a website backed by a DynamoDB table. The table stores viewers’ feedback for football matches. You have been tasked to work with the analytics team to generate reports on the viewers’ feedback. The analytics team wants the DynamoDB data in JSON format, hosted in an S3 bucket, so they can start working on it and create the reports. What is the best and most cost-effective way to convert DynamoDB data to JSON files?
1) Select DynamoDB table then select Export to S3
2) Create a Lambda function to read DynamoDB data, convert them to JSON files, then store files in S3 bucket
3) Use AWS Transfer Family
4) Use AWS DataSync
1) Select DynamoDB table then select Export to S3
A website is currently in development and is going to be hosted on AWS. There is a requirement to store sessions for users logged in to the website, with automatic expiry and deletion of expired user sessions. Which of the following approaches is best suited for this use case?
1) Store users’ sessions in an S3 bucket and enable S3 Lifecycle Policy
2) Store users’ sessions locally in an EC2 instance
3) Store users’ sessions in a DynamoDB table and enable TTL
4) Store users’ sessions in an EFS file system
3) Store users’ sessions in a DynamoDB table and enable TTL
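DynamoDB TTL automatically deletes an item once the epoch timestamp stored in a designated attribute has passed, which suits session data well. A sketch of enabling TTL and writing a session item with an expiry (table and attribute names hypothetical):

import time
import boto3

dynamodb = boto3.client("dynamodb")

# Tell DynamoDB which attribute holds the expiry timestamp.
dynamodb.update_time_to_live(
    TableName="user-sessions",  # hypothetical table name
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Store a session that expires in 30 minutes (epoch seconds).
dynamodb.put_item(
    TableName="user-sessions",
    Item={
        "session_id": {"S": "abc123"},
        "expires_at": {"N": str(int(time.time()) + 1800)},
    },
)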
You have a mobile application and would like to give your users access to their own personal space in the S3 bucket. How do you achieve that?
1) Generate IAM user credentials for each of your application’s users
2) Use Amazon Cognito Identity Federation
3) Use SAML Identity Federation
4) Use a Bucket Policy to make your bucket public
2) Use Amazon Cognito Identity Federation
Amazon Cognito can be used to federate mobile user accounts and provide them with their own IAM permissions, so each user can access their own personal space in the S3 bucket.
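The per-user folder is typically enforced with an IAM policy on the Cognito authenticated role that uses the identity-ID policy variable, so each user can only reach their own prefix. A sketch (bucket name hypothetical):

# Policy for the Cognito authenticated role (bucket name hypothetical).
# ${cognito-identity.amazonaws.com:sub} resolves to the caller's identity ID,
# confining each user to their own "folder".
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::my-app-bucket/${cognito-identity.amazonaws.com:sub}/*",
    }],
}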
You are developing a new web and mobile application that will be hosted on AWS, and you are currently working on the login and signup page. The application backend is serverless, using Lambda, DynamoDB, and API Gateway. Which of the following is the best and easiest approach to configure authentication for your backend?
1) Store users’ credentials in a DynamoDB table encrypted using KMS
2) Store users’ credentials in an S3 bucket encrypted using KMS
3) Use Cognito User Pools
4) Store users’ credentials in AWS Secrets Manager
3) Use Cognito User Pools
You are running a mobile application where you want each registered user to upload/download images to/from their own folder in the S3 bucket. Also, you want to let your users sign up and sign in using their social media accounts (e.g., Facebook). Which AWS service should you choose?
1) AWS IAM
2) AWS IAM Identity Center
3) Amazon Cognito
4) Amazon CloudFront
3) Amazon Cognito
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0 and OpenID Connect.
A startup company plans to run its application on AWS. As a solutions architect, the company hired you to design and implement a fully Serverless REST API. Which technology stack do you recommend?
1) API Gateway + AWS Lambda
2) Application Load Balancer + EC2
3) ECS + EBS
4) Amazon CloudFront + S3
1) API Gateway + AWS Lambda
The following AWS services have an out-of-the-box caching feature, EXCEPT ……………..
1) API Gateway
2) Lambda
3) DynamoDB
2) Lambda
You have a lot of static files stored in an S3 bucket that you want to distribute globally to your users. Which AWS service should you use?
1) S3 Cross-Region Replication
2) Amazon CloudFront
3) Amazon Route 53
4) API Gateway
2) Amazon CloudFront
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. This is a perfect use case for Amazon CloudFront.
You have created a DynamoDB table in ap-northeast-1 and would like to make it available in eu-west-1, so you decided to create a DynamoDB Global Table. What needs to be enabled first before you create a DynamoDB Global Table?
1) DynamoDB Streams
2) DynamoDB DAX
3) DynamoDB Versioning
4) DynamoDB Backups
1) DynamoDB Streams
DynamoDB Streams enable DynamoDB to get a changelog and use that changelog to replicate data across replica tables in other AWS Regions.
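A sketch of enabling Streams on an existing table before adding replicas (table name hypothetical); Global Tables require the NEW_AND_OLD_IMAGES view type:

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="game-scores",  # hypothetical table name
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)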
You have configured a Lambda function to run each time an item is added to a DynamoDB table, using DynamoDB Streams. The function is meant to insert messages into an SQS queue for further long-running processing jobs. Each time the Lambda function is invoked, it is able to read from the DynamoDB Stream, but it isn’t able to insert messages into the SQS queue. What do you think the problem is?
1) Lambda can’t be used to insert messages into the SQS queue, use an EC2 instance instead
2) The Lambda Execution IAM Role is missing permissions
3) The Lambda security group must allow outbound access to SQS
4) The SQS security group must be edited to allow AWS Lambda
2) The Lambda Execution IAM Role is missing permissions
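A sketch of the missing piece: an inline policy on the execution role that allows sending messages to the queue (role, policy, and queue names hypothetical):

import json
import boto3

iam = boto3.client("iam")

# Allow the function to send messages to the target queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:123456789012:jobs-queue",
    }],
}

iam.put_role_policy(
    RoleName="my-function-execution-role",  # hypothetical role name
    PolicyName="allow-sqs-send",
    PolicyDocument=json.dumps(policy),
)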
You would like to create an architecture for a micro-services application whose sole purpose is to encode videos stored in an S3 bucket and store the encoded videos back into an S3 bucket. You would like to make this micro-services application reliable, with the ability to retry upon failure. Each video may take over 25 minutes to be processed. The services used in the architecture should be asynchronous and should be able to be stopped for a day and then resume the next day from the videos that haven’t been encoded yet. Which of the following AWS services would you recommend in this scenario?
1) Amazon S3 + AWS Lambda
2) Amazon SNS + Amazon EC2
3) Amazon SQS + Amazon EC2
4) Amazon SQS + AWS Lambda
3) Amazon SQS + Amazon EC2
Amazon SQS can retain messages for up to 14 days, so you can stop your EC2 instances and process the remaining messages later.
You are running a photo-sharing website where your images are downloaded from all over the world. Every month you publish a master pack of beautiful mountain images that are over 15 GB in size. The content is currently hosted on an Elastic File System (EFS) file system and distributed by an Application Load Balancer and a set of EC2 instances. Each month, you are experiencing very high traffic which increases the load on your EC2 instances and increases network costs. What do you recommend to reduce EC2 load and network costs without refactoring your website?
1) Host the master pack in S3
2) Enable Application Load Balancer Caching
3) Scale up the EC2 instances
4) Create a CloudFront Distribution
4) Create a CloudFront Distribution
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. Amazon CloudFront can be used in front of an Application Load Balancer.
Which AWS service allows you to capture gigabytes of data per second in real time and deliver the data to multiple consuming applications, with a replay feature?
1) Kinesis Data Streams
2) Amazon S3
3) Amazon MQ
1) Kinesis Data Streams
Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. It can continuously capture gigabytes of data per second from hundreds of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.
You are an AWS Solutions Architect. Your organization has a well-functioning online application that uses AWS Auto Scaling. Customers from all around the world are becoming interested in the app, but this global traffic is negatively affecting the application’s performance. Your boss wants to know how you can increase the application’s performance and availability. Which of the following AWS offerings would you suggest?
1) AWS DataSync
2) Amazon DynamoDB Accelerator (DAX)
3) AWS Lake Formation
4) AWS Global Accelerator
4) AWS Global Accelerator
AWS Global Accelerator routes user traffic over the AWS global network to the nearest healthy endpoint, improving performance and availability for a globally distributed user base.
You’re working on an HPC application with your team. A high-performance, low-latency Lustre file system is required to address complex, computationally intensive problems. You must set up this file system on AWS at a low cost. What’s the best way to do this?
1) Use a Lustre file system created with Amazon FSx.
2) Set up a high-performance cluster file system using Amazon EBS.
3) Use an EC2 placement group to create a high-speed volume cluster.
4) Launch Lustre from the AWS Marketplace.
1) Use a Lustre file system created with Amazon FSx.
Amazon FSx for Lustre is fully managed, and customers are only charged for the resources they actually use.
Your website is hosted in an S3 bucket and you have customers from across the world. You want to cache frequently accessed content in an AWS service to minimize latency and boost data transfer speeds. Which of the following options should you choose?
1) Use AWS SDKs to issue concurrent requests to Amazon S3 service endpoints for horizontal scaling.
2) Create numerous Amazon S3 buckets in the same AWS Region.
3) Enable Cross-Region Replication to several AWS Regions to better serve customers around the globe.
4) Set up CloudFront to distribute the S3 bucket’s content.
4) Set up CloudFront to distribute the S3 bucket’s content.
CloudFront caches frequently requested content at Edge Locations, resulting in improved speed. The other options may improve performance, but they do not cache S3 objects.
Your company’s online game runs in an Auto Scaling group, and the app’s traffic is well known in advance: there is a noticeable rise in traffic on Fridays, which lasts over the weekend and then begins to decrease on Mondays. The Auto Scaling group’s scaling needs to be planned. Which approach is best for implementing the scaling policy?
1) Create a scheduled CloudWatch event rule that launches and terminates instances every week.
2) Set a target tracking scaling policy based on the average CPU metric so the ASG scales automatically.
3) Using the ASG’s Automatic Scaling tab, implement a step scaling policy to automatically scale out/in at a defined time every week.
4) Create a scheduled action in the Auto Scaling group and define the recurrence, start and end times, and the capacities of the action.
4) Create a scheduled action in the Auto Scaling group and define the recurrence, start and end times, and the capacities of the action.
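A sketch of the two scheduled actions with boto3 (group name and capacities hypothetical); Recurrence takes a cron expression evaluated in UTC:

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out every Friday at 18:00 UTC for the weekend traffic spike.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="game-asg",  # hypothetical group name
    ScheduledActionName="weekend-scale-out",
    Recurrence="0 18 * * 5",
    MinSize=4, MaxSize=20, DesiredCapacity=10,
)

# Scale back in every Monday at 06:00 UTC.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="game-asg",
    ScheduledActionName="weekday-scale-in",
    Recurrence="0 6 * * 1",
    MinSize=2, MaxSize=10, DesiredCapacity=2,
)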
A machine learning application must be deployed on AWS EC2. The application relies heavily on the speed of inter-instance communication, so you’ve decided to attach a network device to the instances to boost that speed. What’s the best option for increasing throughput?
1) Enable enhanced networking on the EC2 instances.
2) Configure an Elastic Fabric Adapter (EFA) on the instances.
3) Attach a high-throughput ENI to the instances.
4) Create an Elastic File System (EFS) and mount it on the instances.
2) Configure an Elastic Fabric Adapter (EFA) on the instances.
EFA is the best-suited option for accelerating High Performance Computing (HPC) and machine learning applications.
You’re launching several EC2 instances for a new application. The EC2 instances must have both low network latency and high network throughput for the application to perform well, and all instances should be deployed in a single Availability Zone. How would you set this up?
1) Launch all the EC2 instances in a placement group using the Cluster placement strategy.
2) Automatically assign a public IP address to each instance when it is launched.
3) Launch the EC2 instances in an EC2 placement group using the Spread placement strategy.
4) Launch the EC2 instances using an instance type that supports enhanced networking wherever possible.
1) Launch all the EC2 instances in a placement group using the Cluster placement strategy.
The Cluster placement strategy packs instances close together inside a single Availability Zone, which improves network performance between them. You choose the strategy when you create the placement group.
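A sketch of creating the placement group and launching instances into it (group name, AMI, and instance type hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Create a placement group with the cluster strategy.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch instances into the group; they are packed into one AZ.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="c5n.9xlarge",       # a network-optimized type
    MinCount=4, MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)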
You have an S3 bucket where clients may upload images. When an object is uploaded, an event notification containing the object information is delivered to an SQS queue. You also have an ECS cluster that receives messages from the queue and processes them in batches. Depending on the volume of incoming messages and the pace at which the backend processes them, the queue size might fluctuate dramatically. Which metric would you use to scale the ECS cluster in or out?
1) The number of messages in the SQS queue.
2) The ECS cluster’s memory utilization.
3) The total number of items in the S3 bucket.
4) The ECS cluster’s container count.
1) The number of messages in the SQS queue.
You can set up a CloudWatch alarm based on the number of messages in the SQS queue (the ApproximateNumberOfMessagesVisible metric) and use that alarm to scale the ECS cluster out or in.
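A sketch of such an alarm (queue and alarm names hypothetical); the alarm’s actions would then invoke the scale-out policy for the ECS capacity:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the backlog exceeds 1,000 visible messages.
cloudwatch.put_metric_alarm(
    AlarmName="sqs-backlog-high",  # hypothetical alarm name
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "image-jobs"}],  # hypothetical
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    # AlarmActions would reference the scale-out policy ARN.
)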
You have an existing VPC and need to route all traffic from the VPC to AWS S3 buckets over the AWS internal network, so a VPC endpoint for S3 has been set up and S3 bucket traffic is allowed on it. As part of the application you’re building, you created a route table, added a route to the VPC endpoint, and associated the route table with your new subnet. However, when you submit an S3 bucket request from EC2 using the AWS CLI, you receive a 403 Access Denied error. What may be the problem?
1) Your VPC is located in a different Region than the AWS S3 bucket.
2) Traffic to the S3 prefix list is blocked by the EC2 security group’s outbound rules.
3) A restrictive VPC endpoint policy may be denying access to the S3 bucket.
4) The EC2 instances are not listed as an allowed origin in the S3 bucket’s CORS configuration.
3) A restrictive VPC endpoint policy may be denying access to the S3 bucket.
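For illustration, an endpoint policy that only allows one specific bucket would produce exactly this 403 for requests to any other bucket (bucket name and endpoint ID hypothetical):

import json
import boto3

ec2 = boto3.client("ec2")

# Restrictive endpoint policy: only one bucket is reachable through
# this endpoint; requests to any other bucket are denied with 403.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::allowed-bucket",
            "arn:aws:s3:::allowed-bucket/*",
        ],
    }],
}

ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",  # hypothetical endpoint ID
    PolicyDocument=json.dumps(policy),
)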
You have a CloudFront Distribution that serves your website hosted on a fleet of EC2 instances behind an Application Load Balancer. All your clients are from the United States, but you found that some malicious requests are coming from other countries. What should you do to only allow users from the US and block other countries?
1) Use CloudFront Geo Restriction
2) Use Origin Access Control
3) Set up a security group and attach it to your CloudFront Distribution
4) Use a Route 53 Latency record and attach it to CloudFront
1) Use CloudFront Geo Restriction
You have a static website hosted on an S3 bucket. You have created a CloudFront Distribution that points to your S3 bucket to better serve your requests and improve performance. After a while, you noticed that users can still access your website directly from the S3 bucket. You want to enforce users to access the website only through CloudFront. How would you achieve that?
1) Send an email to your clients and tell them not to use the S3 endpoint
2) Configure your CloudFront Distribution and create an Origin Access Control (OAC), then update your S3 Bucket Policy to only accept requests from your CloudFront Distribution
3) Use S3 Access Points to redirect clients to CloudFront
2) Configure your CloudFront Distribution and create an Origin Access Control (OAC), then update your S3 Bucket Policy to only accept requests from your CloudFront Distribution
What does this S3 bucket policy do?
{ "Version": "2012-10-17", "Id": "Mystery policy", "Statement": [{ "Sid": "What could it be?", "Effect": "Allow", "Principal": { "Service": "cloudfront.amazonaws.com" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::examplebucket/*", "Condition": { "StringEquals": { "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/EDFDVBD6EXAMPLE" } } }] }
1) Forces GetObject request to be encrypted if coming from CloudFront
2) Only allows the S3 bucket content to be accessed from your CloudFront Distribution
3) Only allows GetObject type of request on the S3 bucket from anybody
2) Only allows the S3 bucket content to be accessed from your CloudFront Distribution
A WordPress website is hosted on a set of EC2 instances in an EC2 Auto Scaling Group and fronted by a CloudFront Distribution which is configured to cache the content for 3 days. You have released a new version of the website and want to release it to production immediately, without waiting 3 days for the cached content to expire. What is the easiest and most efficient way to solve this?
1) Open a support ticket with AWS Support to remove the CloudFront Cache
2) CloudFront Cache Invalidation
3) EC2 Cache Invalidation
2) CloudFront Cache Invalidation
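A sketch of invalidating the whole cache so the new version is served immediately (distribution ID hypothetical; CallerReference must simply be unique per request):

import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="EDFDVBD6EXAMPLE",  # hypothetical distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # unique per request
    },
)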
A company is deploying a media-sharing website to AWS. They are going to use CloudFront to deliver the content with low latency to their customers, who are located in the US and Europe only. After a while, CloudFront costs turn out to be very high. Which CloudFront feature allows you to decrease costs by targeting only the US and Europe?
1) CloudFront Cache Invalidation
2) CloudFront Price Classes
3) CloudFront Cache Behavior
4) Origin Access Control
2) CloudFront Price Classes
CloudFront Price Classes let you reduce delivery costs by excluding the more expensive Edge Locations from your distribution; Price Class 100 includes only North America and Europe.
A company is migrating a web application to AWS Cloud, and they are going to use a set of EC2 instances in an EC2 Auto Scaling Group. The web application is made of multiple components, so they will need a host-based routing feature to route to specific web application components. This web application is used by many customers, and therefore it must have a static IP address so it can be whitelisted by the customers’ firewalls. As the customers are distributed around the world, the web application must also provide low latency to all customers. Which AWS service can help you assign a static IP address and provide low latency across the globe?
1) AWS Global Accelerator + Application Load Balancer
2) Amazon CloudFront
3) Network Load Balancer
4) Application Load Balancer
1) AWS Global Accelerator + Application Load Balancer
What is the minimum length of time before you can schedule a KMS key to be deleted?
1) 30 days
2) 7 days
3) 1 day
4) There is no waiting period
2) 7 days
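The waiting period can be set anywhere from 7 to 30 days (30 is the default), and the key is unusable while deletion is pending. A boto3 sketch with a hypothetical key ID:

import boto3

kms = boto3.client("kms")

# Schedule deletion with the minimum 7-day waiting period.
kms.schedule_key_deletion(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",  # hypothetical key ID
    PendingWindowInDays=7,
)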
Which AWS service supports automatic rotation of RDS security credentials?
1) S3
2) DynamoDB
3) Parameter Store
4) Secrets Manager
4) Secrets Manager
What would you use Amazon Cognito for?
1) To deploy physical firewall protection across your VPCs via its managed infrastructure (e.g., a physical firewall that is managed by AWS).
2) To provide authentication, authorization, and user management for your web and mobile apps without the need for custom code.
3) To view all your security alerts from services like Amazon GuardDuty, Amazon Inspector, Amazon Macie, and AWS Firewall Manager.
4) To get the compliance-related information that matters to you, such as AWS security and compliance reports or select online agreements.
2) To provide authentication, authorization, and user management for your web and mobile apps without the need for custom code.
Which of the following is NOT a data source for GuardDuty?
1) CloudTrail logs
2) DNS query logs
3) RDS event history
4) VPC Flow Logs
3) RDS event history
Which Layers does WAF provide protection on?
1) All Layers
2) Layers 3 and 4
3) Layers 3, 4, and 7
4) Layer 7
4) Layer 7
What is the best way to deliver content from an S3 bucket that only allows users to view content for a set period of time?
1) Set a bucket policy to open up the content you need to share.
2) Create a public copy of your data in another S3 bucket.
3) Replicate the S3 data to the requested user’s S3 bucket.
4) Create a presigned URL using S3.
4) Create a presigned URL using S3.
Presigned URLs allow you to restrict the length of time for which the content can be viewed.
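A sketch of generating a presigned GET URL that is valid for one hour (bucket and key hypothetical):

import boto3

s3 = boto3.client("s3")

# Anyone holding this URL can fetch the object until it expires.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-media-bucket", "Key": "video.mp4"},  # hypothetical
    ExpiresIn=3600,  # seconds
)
print(url)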
You need a single source you can visit to get the compliance-related information that matters to you, such as AWS security and compliance reports or select online agreements. Which service should you use?
1) AWS Artifact
2) AWS Audit Manager
3) Amazon Cognito
4) Amazon Detective
1) AWS Artifact
Artifact is a single source you can visit to get the compliance-related information that matters to you, such as AWS security and compliance reports or select online agreements.
Your boss requires automatic key rotation for your encrypted data. Which AWS service supports this?
1) EBS
2) KMS
3) SQS
4) EC2
2) KMS
To enable In-flight Encryption (In-Transit Encryption), we need to have ……………………
1) an HTTP endpoint with an SSL certificate
2) an HTTPS endpoint with an SSL certificate
3) a TCP endpoint
2) an HTTPS endpoint with an SSL certificate
In-flight Encryption = HTTPS, and HTTPS cannot be enabled without an SSL certificate.
Server-Side Encryption means that the data is sent encrypted to the server.
True
False
False
Server-Side Encryption means the server will encrypt the data for us. We don’t need to encrypt it beforehand.