PT1 Flashcards

1
Q

A data analytics company wants to use clickstream data for Machine Learning tasks, develop algorithms, and create visualizations and dashboards to support the business stakeholders. Each of these business units works independently and would need real-time access to this clickstream data for their applications.

As a Developer Associate, which of the following AWS services would you recommend such that it provides a highly available and fault-tolerant solution to capture the clickstream events from the source and then provide a simultaneous feed of the data stream to the consumer applications?

A

Correct: Kinesis Data Streams
- enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis applications, which means several consumer applications can read the same stream concurrently

Incorrect:
Kinesis Data Firehose
- Kinesis Data Firehose is used to load streaming data into data stores
Kinesis Data Analytics
- Kinesis Data Analytics is used to analyze streaming data using SQL queries or sophisticated Java applications
SQS
- For SQS, you cannot have the same message being consumed by multiple consumers at the same time
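
To make this concrete, here is a minimal boto3 sketch of a producer putting a clickstream event onto a Kinesis data stream; the stream name, region, and payload are assumptions for illustration, and each business unit would run its own consumer application against the same stream.

  import json
  import boto3

  kinesis = boto3.client("kinesis", region_name="us-east-1")

  # Publish one clickstream event; records with the same partition key land on the same shard,
  # and multiple consumer applications can read the stream concurrently.
  kinesis.put_record(
      StreamName="clickstream-events",   # hypothetical stream name
      Data=json.dumps({"user": "u-123", "page": "/cart"}).encode("utf-8"),
      PartitionKey="u-123",
  )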

2
Q

A cyber forensics application, running behind an ALB, wants to analyze patterns for the client IPs.

Which of the following headers can be used for this requirement?

A

Correct: To see the IP address of the client, use the X-Forwarded-For request header.

Incorrect:

  • To determine the protocol used between the client and the load balancer, use the X-Forwarded-Proto request header
  • the X-Forwarded-Port request header helps you identify the destination port that the client used to connect to the load balancer
  • X-Forwarded-IP - This is a made-up option and has been added as a distractor
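
As a small illustration (assuming a backend that can read raw request headers), the original client IP is the left-most entry of the comma-separated X-Forwarded-For value that the ALB appends to:

  def client_ip_from_xff(x_forwarded_for: str) -> str:
      # Header format: "client, proxy1, proxy2" -> the first address is the original client
      return x_forwarded_for.split(",")[0].strip()

  print(client_ip_from_xff("203.0.113.7, 10.0.0.12"))  # -> 203.0.113.7
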
3
Q

DevOps engineers are developing an order processing system where notifications are sent to a department whenever an order is placed for a product. The system also pushes identical notifications of the new order to a processing module that would allow EC2 instances to handle the fulfillment of the order. In the case of processing errors, the messages should be allowed to be re-processed at a later stage and never lost.

Which of the following solutions can be used to address this use-case?

A

Correct: SNS + SQS

Incorrect:
SNS + Kinesis
- the maximum data retention period is 7 days, and a processing issue would block all subsequent messages
SNS + Lambda
- your EC2 instances cannot "poll" messages from Lambda functions, so this would not work
SQS + SES
- the messages need to be processed twice (once for sending the notification and once for order fulfillment), and a standard SQS message can only be consumed by one application
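
A hedged boto3 sketch of the SNS-to-SQS fan-out this answer describes; the topic and queue names are assumptions, and the SQS access policy that allows the SNS topic to deliver messages to the queue is omitted for brevity:

  import boto3

  sns = boto3.client("sns")
  sqs = boto3.client("sqs")

  topic_arn = sns.create_topic(Name="new-orders")["TopicArn"]
  queue_url = sqs.create_queue(QueueName="order-fulfillment")["QueueUrl"]
  queue_arn = sqs.get_queue_attributes(
      QueueUrl=queue_url, AttributeNames=["QueueArn"]
  )["Attributes"]["QueueArn"]

  # Fan-out: every order notification published to the topic is also delivered to the queue,
  # where EC2 instances can poll, retry failed messages, and never lose them.
  sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)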

4
Q

A popular mobile app retrieves data from an AWS DynamoDB table that was provisioned with read capacity units (RCUs) that are evenly shared across four partitions. One of those partitions is receiving more traffic than the other partitions, causing hot partition issues.

What technology will allow you to reduce the read traffic on your AWS DynamoDB table with minimal effort?

A

Correct: DynamoDB Accelerator (DAX)
a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement

Incorrect:
DynamoDB Streams
- A stream record contains information about a data modification to a single item in a DynamoDB table
ElastiCache
- you would need to modify your application code to check the cache before querying the main data store. Since the given use-case mandates minimal effort, this option is not correct
More partitions
- This option has been added as a distractor, as DynamoDB handles partitioning for you automatically

5
Q

The development team at a retail organization wants to allow a Lambda function in its AWS Account A to access a DynamoDB table in another AWS Account B.

As a Developer Associate, which of the following solutions would you recommend for the given use-case?

A

Correct option:
Create an IAM role in account B with access to DynamoDB. Modify the trust policy of the role in Account B to allow the execution role of Lambda to assume this role. Update the Lambda function code to add the AssumeRole API call

Incorrect:
Create a clone of the Lambda function in AWS Account B so that it can access the DynamoDB table in the same account
- Creating a clone of the Lambda function is a distractor as this does not solve the use-case outlined in the problem statement
Add a resource policy to the DynamoDB table in AWS Account B to give access to the Lambda function in Account A
- You cannot attach a resource policy to a DynamoDB table, so this option is incorrect
Create an IAM role in Account B with access to DynamoDB. Modify the trust policy of the execution role in Account A to allow the execution role of Lambda to assume the IAM role in Account B. Update the Lambda function code to add the AssumeRole API call
- As mentioned in the explanation above, you need to modify the trust policy of the IAM role in Account B so that it allows the execution role of Lambda function in account A to assume the IAM role in Account B
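
For illustration, a sketch of what the Lambda code in Account A might look like; the role ARN, table name, and key are placeholders, and the role in Account B is assumed to trust the Lambda execution role:

  import boto3

  def lambda_handler(event, context):
      sts = boto3.client("sts")
      creds = sts.assume_role(
          RoleArn="arn:aws:iam::222222222222:role/dynamodb-access",  # role in Account B (placeholder)
          RoleSessionName="cross-account-dynamodb",
      )["Credentials"]

      # Use the temporary credentials to query the table that lives in Account B
      dynamodb = boto3.client(
          "dynamodb",
          aws_access_key_id=creds["AccessKeyId"],
          aws_secret_access_key=creds["SecretAccessKey"],
          aws_session_token=creds["SessionToken"],
      )
      return dynamodb.get_item(
          TableName="orders",                            # table in Account B (placeholder)
          Key={"orderId": {"S": event["orderId"]}},
      )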

6
Q

Your team-mate has configured an Amazon S3 event notification for an S3 bucket that holds sensitive audit data of a firm. As the Team Lead, you are receiving the SNS notifications for every event in this bucket. After validating the event data, you realized that a few events are missing.

What could be the reason for this behavior and how to avoid this in the future?

A

Correct:
If two writes are made to a single non-versioned object at the same time, it is possible that only a single event notification will be sent
- If you want to ensure that an event notification is sent for every successful write, you can enable versioning on your bucket

Incorrect:
Someone could have created a new notification configuration and that has overridden your existing configuration - It is possible that the configuration can be overridden. But, in the current scenario, the team lead is receiving notifications for most of the events, which nullifies the claim that the configuration is overridden
Versioning is enabled on the S3 bucket and event notifications are getting fired for only one version - This is an incorrect statement. If you want to ensure that an event notification is sent for every successful write, you should enable versioning on your bucket
Your notification action is writing to the same bucket that triggers the notification - If your notification ends up writing to the bucket that triggers the notification, this could cause an execution loop. But it will not result in missing events.
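
As a sketch, enabling versioning on the bucket (name is a placeholder) so that concurrent writes to the same key each produce an object version and therefore an event notification:

  import boto3

  s3 = boto3.client("s3")
  s3.put_bucket_versioning(
      Bucket="audit-data-bucket",                        # placeholder bucket name
      VersioningConfiguration={"Status": "Enabled"},
  )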

7
Q

You store your video files in an S3 bucket separate from the S3 bucket that hosts your main static website. When accessing the video URLs directly, users can view the videos in the browser, but the videos won't play when users visit the main website.

What is the root cause of this problem?

A

Correct option:
Enable CORS - Cross-origin resource sharing (CORS) defines a way for client web applications loaded in one domain (the website) to interact with resources in a different domain (the bucket serving the videos)

Incorrect:
Change the bucket policy - we know that’s not the case because it works using the direct URL but it doesn’t work when you click on a link to access the video
Amend the IAM policy - This scenario refers to public users of a website and they need not have an IAM user account
Disable Server-Side Encryption - Encryption is not the issue, because you can access the video directly using a URL but not from the main website
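
A minimal boto3 sketch of a CORS rule on the video bucket; the bucket name and website origin are assumptions:

  import boto3

  s3 = boto3.client("s3")
  s3.put_bucket_cors(
      Bucket="my-video-bucket",                               # placeholder
      CORSConfiguration={
          "CORSRules": [{
              "AllowedOrigins": ["https://www.example.com"],  # the static website's domain (placeholder)
              "AllowedMethods": ["GET"],
              "AllowedHeaders": ["*"],
              "MaxAgeSeconds": 3000,
          }]
      },
  )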

8
Q

A large firm stores its static data assets on Amazon S3 buckets. Each service line of the firm has its own AWS account. For a business use case, the Finance department needs to give access to their S3 bucket’s data to the Human Resources department.

Which of the below options is NOT feasible for cross-account access of S3 bucket objects?

A

Correct option:
Use IAM roles and resource-based policies to delegate access across accounts within different partitions via programmatic access only - This is NOT feasible because IAM roles and resource-based policies can delegate access across accounts only within a single partition.

Incorrect:
Use Resource-based policies and AWS Identity and Access Management (IAM) policies for programmatic-only access to S3 bucket objects - Use bucket policies to manage cross-account control and audit the S3 object’s permissions. If you apply a bucket policy at the bucket level, you can define who can access (Principal element), which objects they can access (Resource element), and how they can access (Action element).
Use Resource-based Access Control List (ACL) and IAM policies for programmatic-only access to S3 bucket objects - Use object ACLs to manage permissions only for specific scenarios and only if ACLs meet your needs better than IAM and S3 bucket policies.
Use Cross-account IAM roles for programmatic and console access to S3 bucket objects - Not all AWS services support resource-based policies. This means that you can use cross-account IAM roles to centralize permission management when providing cross-account access to multiple services.
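
For reference, a hedged sketch of a resource-based bucket policy that lets a second account (the HR account here, with placeholder account ID and bucket name) read the Finance bucket's objects:

  import json
  import boto3

  policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Principal": {"AWS": "arn:aws:iam::111111111111:root"},   # HR account (placeholder)
          "Action": ["s3:GetObject", "s3:ListBucket"],
          "Resource": [
              "arn:aws:s3:::finance-assets",      # bucket-level permission for ListBucket
              "arn:aws:s3:::finance-assets/*",    # object-level permission for GetObject
          ],
      }],
  }

  boto3.client("s3").put_bucket_policy(Bucket="finance-assets", Policy=json.dumps(policy))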

9
Q

A junior developer working on ECS instances terminated a container instance in Amazon Elastic Container Service (Amazon ECS) as per instructions from the team lead. But the container instance continues to appear as a resource in the ECS cluster.

As a Developer Associate, which of the following solutions would you recommend to fix this behavior?

A

Correct option:
You terminated the container instance while it was in the STOPPED state, which led to this synchronization issue - If you terminate a container instance while it is in the STOPPED state, that container instance isn't automatically removed from the cluster. You will need to deregister your container instance in the STOPPED state by using the Amazon ECS console or the AWS Command Line Interface

Incorrect:
You terminated the container instance while it was in the RUNNING state, which led to this synchronization issue - If you terminate a container instance in the RUNNING state, that container instance is automatically removed from the cluster
The container instance has been terminated with the AWS CLI, whereas, for ECS instances, the Amazon ECS CLI should be used to avoid any synchronization issues - This is incorrect and has been added as a distractor
A custom software on the container instance could have failed and resulted in the container hanging in an unhealthy state until restarted again - This is an incorrect statement. It is already mentioned in the question that the developer has terminated the instance.
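
A sketch of deregistering the lingering container instance with boto3; the cluster name and container instance ARN are placeholders:

  import boto3

  ecs = boto3.client("ecs")
  ecs.deregister_container_instance(
      cluster="demo-cluster",   # placeholder
      containerInstance="arn:aws:ecs:us-east-1:123456789012:container-instance/demo-cluster/abc123",
      force=True,               # deregister even if tasks are still registered to the instance
  )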

10
Q

You work as a developer doing contract work for the government on AWS gov cloud. Your applications use Amazon Simple Queue Service (SQS) for its message queue service. Due to recent hacking attempts, security measures have become stricter and require you to store data in encrypted queues.

Which of the following steps can you take to meet your requirements without making changes to the existing code?

A

Correct option: Enable SQS KMS encryption

Incorrect:
Use the SSL endpoint - The given use-case needs encryption at rest
Use Client-side encryption - will require a code change, so this option is incorrect
Use Secrets Manager - Secrets Manager cannot be used for encrypting data at rest
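
As an illustration, enabling SSE-KMS on an existing queue via a single attribute change, which leaves producer and consumer code untouched; the queue URL and key alias are placeholders:

  import boto3

  sqs = boto3.client("sqs")
  sqs.set_queue_attributes(
      QueueUrl="https://sqs.us-gov-west-1.amazonaws.com/123456789012/secure-queue",  # placeholder
      Attributes={
          "KmsMasterKeyId": "alias/aws/sqs",          # AWS managed key; a customer managed key alias also works
          "KmsDataKeyReusePeriodSeconds": "300",      # how long SQS reuses a data key before calling KMS again
      },
  )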

11
Q

AWS CloudFormation helps model and provision all the cloud infrastructure resources needed for your business.

Which of the following services rely on CloudFormation to provision resources (Select two)?

A

Correct:
AWS Elastic Beanstalk and AWS Serverless Application Model (AWS SAM)

Incorrect:
AWS Lambda - does not need CloudFormation to run
AWS Auto Scaling - can use CloudFormation, but it is not a mandatory requirement
CodeBuild - does not rely on CloudFormation to provision resources; CodePipeline can use AWS CloudFormation as a deployment action, but it is not a mandatory service

12
Q

A company that specializes in cloud communications platform as a service allows software developers to programmatically use its services to send and receive text messages. The initial platform did not have a scalable architecture, as all components were hosted on one server, so it should be redesigned for high availability and scalability.

Which of the following options can be used to implement the new architecture? (select two)

A

Correct: ALB + ECS
When you use ECS with a load balancer such as ALB deployed across multiple Availability Zones, it helps provide a scalable and highly available REST API.
API Gateway + Lambda
API Gateway and Lambda achieve the same purpose in a serverless fashion, integrating capabilities such as authentication, with a fully scalable and highly available architecture

Incorrect:
SES + S3 - The combination of these services only provide email and object storage services
CloudWatch + CloudFront - The combination of these services only provide monitoring and fast content delivery network (CDN) services.
EBS + RDS - The combination of these services only provide elastic block storage and database services

13
Q

Your e-commerce company needs to improve its software delivery process and is moving away from the waterfall methodology. You decided that every application should be built using the best CI/CD practices and every application should be packaged and deployed as a Docker container. The Docker images should be stored in ECR and pushed with AWS CodePipeline and AWS CodeBuild.

When you attempt to do this, the last step fails with an authorization issue. What is the most likely issue?

A

Correct option:
The IAM permissions are wrong for the CodeBuild service

Incorrect:
The ECR repository is stale, you must delete and re-create it - You can delete a repository when you are done using it; "stale" is not a concept within ECR
CodeBuild cannot talk to ECR because of security group issues - A security group acts as a virtual firewall at the instance level and it is not related to pushing Docker images
The ECS instances are misconfigured and must contain additional data in /etc/ecs/ecs.config - An authorization error indicates an access issue, so you should look at permissions first rather than at the configuration

14
Q

Your team maintains a public API Gateway API that is accessed by clients from another domain. Usage has been consistent for the last few months, but recently it has more than doubled. As a result, your costs have gone up and you would like to prevent unauthorized domains from accessing your API.

Which of the following actions should you take?

A

Correct option:
Restrict access by using CORS - When your API’s resources receive requests from a domain other than the API’s own domain and you want to restrict servicing these requests, you must disable cross-origin resource sharing (CORS) for selected methods on the resource

Incorrect:
Use Account-level throttling - this limits the number of requests and is not a suitable answer for the current scenario
Use Mapping Templates - Mapping Templates have nothing to do with access and are not useful for the current scenario
Assign a Security Group to your API Gateway - You could restrict access by IP address this way, but the downside is that a client's IP address can change

15
Q

A retail company manages its IT infrastructure on AWS Cloud via Elastic Beanstalk. The development team at the company is planning to deploy the next version with MINIMUM application downtime and the ability to rollback quickly in case deployment goes wrong.

As a Developer Associate, which of the following options would you recommend to the development team?

A

Correct option:
Deploy the new version to a separate environment via Blue/Green Deployment, and then swap Route 53 records of the two environments to redirect traffic to the new version

Incorrect:
Deploy the new application version using 'All at once' deployment policy - Although 'All at once' is the quickest deployment method, the application may become unavailable to users (or have low availability) for a short time
Deploy the new application version using 'Rolling' deployment policy - the rollback process is a manual redeploy, so it's not as quick as a Blue/Green deployment
Deploy the new application version using 'Rolling with additional batch' deployment policy - the rollback process is a manual redeploy, so it's not as quick as a Blue/Green deployment

16
Q

A development team at a social media company uses AWS Lambda for its serverless stack on AWS Cloud. For a new deployment, the Team Lead wants to send only a certain portion of the traffic to the new Lambda version. In case the deployment goes wrong, the solution should also support the ability to roll back to a previous version of the Lambda function, with MINIMUM downtime for the application.

As a Developer Associate, which of the following options would you recommend to address this use-case?

A

Correct: Set up the application to use an alias that points to the current version. Deploy the new version of the code and configure the alias to send 10% of the users to this new version. If the deployment goes wrong, reset the alias to point all traffic to the current version

Incorrect:
Set up the application to use an alias that points to the current version. Deploy the new version of the code and configure alias to send all users to this new version. If the deployment goes wrong, reset the alias to point to the current version - does not meet the requirement of sending only a certain portion of the traffic to the new Lambda version
Set up the application to directly deploy the new Lambda version. If the deployment goes wrong, reset the application back to the current version using the version number in the ARN - does not meet the requirement of sending only a certain portion of the traffic to the new Lambda version
Set up the application to have multiple aliases of the Lambda function. Deploy the new version of the code. Configure a new alias that points to the current alias of the Lambda function for handling 10% of the traffic. If the deployment goes wrong, reset the new alias to point all traffic to the most recent working alias of the Lambda function - An alias for a Lambda function can only point to a Lambda function version. It cannot point to another alias
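
For illustration, a boto3 sketch of the weighted alias configuration described in the correct option; the function name, alias name, and version numbers are placeholders:

  import boto3

  lam = boto3.client("lambda")

  # Keep 90% of traffic on version 5 and send 10% to the new version 6
  lam.update_alias(
      FunctionName="orders-api",    # placeholder
      Name="live",
      FunctionVersion="5",
      RoutingConfig={"AdditionalVersionWeights": {"6": 0.1}},
  )

  # Rollback: point the alias entirely back to the current version
  lam.update_alias(
      FunctionName="orders-api",
      Name="live",
      FunctionVersion="5",
      RoutingConfig={"AdditionalVersionWeights": {}},
  )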

17
Q

A company has AWS Lambda functions where each is invoked by other AWS services such as Amazon Kinesis Data Firehose, Amazon API Gateway, Amazon Simple Storage Service, or Amazon CloudWatch Events. What these Lambda functions have in common is that they process heavy workloads such as big data analysis, large file processing, and statistical computations.

What should you do to improve the performance of your AWS Lambda functions without changing your code?

A

Correct option:
Increase the RAM assigned to your Lambda function - the CPU allocated to a Lambda function is proportional to the memory you configure, so increasing the memory also increases the available compute power without any code changes

Incorrect options:
Change the instance type for your Lambda function - Instance types apply to the EC2 service, not to Lambda functions
Change your Lambda function runtime to use Golang - This changes programming language which requires code changes
Increase the Lambda function timeout - may help in case you have some heavy processing, but won’t help with the actual performance of your Lambda function
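
A one-call sketch of the recommended change; the function name and memory size are placeholders:

  import boto3

  boto3.client("lambda").update_function_configuration(
      FunctionName="big-data-crunch",   # placeholder
      MemorySize=3008,                  # more memory also means proportionally more CPU, with no code change
  )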

18
Q

A security company requires all developers to perform server-side encryption with customer-provided encryption keys when performing operations in AWS S3. Developers should write software in C# using the AWS SDK and implement the requirement in the PUT, GET, HEAD, and Copy operations.

Which of the following encryption methods meets this requirement?

A

Correct option:

SSE-C (server-side encryption with customer-provided keys) - you manage the encryption keys and Amazon S3 manages the encryption and decryption; the key must be supplied with every PUT, GET, HEAD, and Copy request
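
Although the requirement mentions C#, here is an equivalent hedged sketch in Python showing the SSE-C pattern: the caller supplies the key on every request and S3 never stores it (bucket, key, and object names are placeholders):

  import os
  import boto3

  s3 = boto3.client("s3")
  customer_key = os.urandom(32)   # 256-bit key managed by the customer

  s3.put_object(
      Bucket="secure-bucket", Key="report.csv", Body=b"data",   # placeholders
      SSECustomerAlgorithm="AES256",
      SSECustomerKey=customer_key,
  )

  # The same key must be supplied again for GET, HEAD, and Copy operations
  obj = s3.get_object(
      Bucket="secure-bucket", Key="report.csv",
      SSECustomerAlgorithm="AES256",
      SSECustomerKey=customer_key,
  )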

19
Q

An Auto Scaling group has a maximum capacity of 3, a current capacity of 2, and a scaling policy that adds 3 instances.

When executing this scaling policy, what is the expected outcome?

A

Correct: Amazon EC2 Auto Scaling adds only 1 instance to the group - the group cannot grow beyond its maximum capacity of 3, so scaling out from the current capacity of 2 adds just one instance

20
Q

You are getting ready for an event to show off your Alexa skill written in JavaScript. As you are testing your voice activation commands you find that some intents are not invoking as they should and you are struggling to figure out what is happening. You included the following code console.log(JSON.stringify(this.event)) in hopes of getting more details about the request to your Alexa skill.

You would like the logs stored in an Amazon Simple Storage Service (S3) bucket named MyAlexaLog. How do you achieve this?

A

Correct option:
Use CloudWatch integration feature with S3 - you can export log data from CloudWatch Logs to an Amazon S3 bucket such as MyAlexaLog

Incorrect options:
Use CloudWatch integration feature with Kinesis
Use CloudWatch integration feature with Lambda
Use CloudWatch integration feature with Glue
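
As a sketch, exporting the skill's log group to the MyAlexaLog bucket with create_export_task; the task name, log group, and time range are assumptions, and the bucket policy must allow CloudWatch Logs to write to it:

  import time
  import boto3

  logs = boto3.client("logs")
  now_ms = int(time.time() * 1000)

  logs.create_export_task(
      taskName="alexa-log-export",                    # placeholder
      logGroupName="/aws/lambda/my-alexa-skill",      # placeholder log group
      fromTime=now_ms - 24 * 60 * 60 * 1000,          # last 24 hours
      to=now_ms,
      destination="MyAlexaLog",                       # S3 bucket from the question
      destinationPrefix="alexa-logs",
  )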

21
Q

Your web application front end consists of 5 EC2 instances behind an Application Load Balancer. You have configured your web application to capture the IP address of the client making requests. When viewing the data captured you notice that every IP address being captured is the same, which also happens to be the IP address of the Application Load Balancer.

What should you do to identify the true IP address of the client?

A

Correct option:
Look into the X-Forwarded-For header in the backend

Incorrect options:

Modify the front-end of the website so that the users send their IP in the requests - no need to modify the application
Look into the X-Forwarded-Proto header in the backend - The X-Forwarded-Proto request header helps you identify the protocol (HTTP or HTTPS)
Look into the client’s cookie - For this, we would need to modify the client-side logic and server-side logic

22
Q

A Developer is configuring an Amazon EC2 Auto Scaling group to scale dynamically.

Which metric below is NOT part of Target Tracking Scaling Policy?

A

Correct option:
ApproximateNumberOfMessagesVisible - This is a CloudWatch Amazon SQS queue metric. The number of messages in a queue might not change proportionally to the size of the Auto Scaling group

Incorrect:
ASGAverageCPUUtilization - This is a predefined metric for target tracking scaling policy
ASGAverageNetworkOut - This is a predefined metric for target tracking scaling policy. This represents the Average number of bytes sent out on all network interfaces
ALBRequestCountPerTarget - This is a predefined metric for target tracking scaling policy. This represents the Number of requests completed per target in an Application Load Balancer target group
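
For context, a sketch of attaching a target tracking policy that uses one of the predefined metrics; the group name, policy name, and target value are placeholders:

  import boto3

  autoscaling = boto3.client("autoscaling")
  autoscaling.put_scaling_policy(
      AutoScalingGroupName="web-asg",                 # placeholder
      PolicyName="cpu-target-50",
      PolicyType="TargetTrackingScaling",
      TargetTrackingConfiguration={
          "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
          "TargetValue": 50.0,                        # keep average CPU around 50%
      },
  )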

23
Q

The development team at an e-commerce company is preparing for the upcoming Thanksgiving sale. The product manager wants the development team to implement appropriate caching strategy on Amazon ElastiCache to withstand traffic spikes on the website during the sale. A key requirement is to facilitate consistent updates to the product prices and product description, so that the cache never goes out of sync with the backend.

As a Developer Associate, which of the following solutions would you recommend for the given use-case?

A

Correct:
Use a caching strategy to write to the backend first and then invalidate the cache - As the cache gets invalidated, the caching engine would then fetch the latest value from the backend, thereby making sure that the product prices and product description stay consistent with the backend

Incorrect:
Use a caching strategy to update the cache and the backend at the same time - The cache and the backend cannot be updated at the same time via a single atomic operation as these are two separate systems
Use a caching strategy to write to the backend first and wait for the cache to expire via TTL - This strategy could work if the TTL is really short. However, for the duration of this TTL, the cache would be out of sync with the backend
Use a caching strategy to write to the cache directly and sync the backend at a later time - The product prices and description on the cache must always stay consistent with the backend. You cannot sync the backend at a later time.
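
A framework-agnostic sketch of the write-then-invalidate strategy, using redis-py against an ElastiCache for Redis endpoint; the endpoint, table, and key names are hypothetical:

  import boto3
  import redis

  cache = redis.Redis(host="my-cache.example.use1.cache.amazonaws.com", port=6379)  # placeholder endpoint
  table = boto3.resource("dynamodb").Table("products")                              # placeholder backend table

  def update_price(product_id: str, price: str) -> None:
      # 1) Write to the backend first...
      table.update_item(
          Key={"productId": product_id},
          UpdateExpression="SET price = :p",
          ExpressionAttributeValues={":p": price},
      )
      # 2) ...then invalidate the cached entry so the next read repopulates it from the backend
      cache.delete(f"product:{product_id}")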

24
Q

As an AWS Certified Developer Associate, you have been hired to consult with a company that uses a NoSQL database for its mobile applications. The developers are using DynamoDB to perform operations such as GetItem but are limited in knowledge. They would like to be more efficient, retrieving only some attributes rather than all of them.

Which of the following recommendations would you provide?

A

Correct option:
Specify a ProjectionExpression: A projection expression is a string that identifies the attributes you want

Incorrect options:
Use a FilterExpression - A filter expression is applied after Query finishes, but before the results are returned
Use the --query parameter - You must provide the name of the partition key attribute and a single value for that attribute. The Query returns all items with that partition key value. Optionally, you can provide a sort key attribute and use a comparison operator to refine the search results
Use a Scan - By default, a Scan operation returns all of the data attributes for every item in the table or index. You can also use the ProjectionExpression parameter so that Scan only returns some of the attributes, rather than all of them
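
As a sketch, a GetItem call that returns only the attributes you ask for; the table, key, and attribute names are placeholders:

  import boto3

  dynamodb = boto3.client("dynamodb")
  resp = dynamodb.get_item(
      TableName="users",                              # placeholder
      Key={"userId": {"S": "u-123"}},
      ProjectionExpression="firstName, lastLogin",    # only these attributes are returned
  )
  print(resp.get("Item"))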

25
Q

You are a DynamoDB developer for an aerospace company that requires you to write 6 objects per second of 4.5KB in size each.

How many write capacity units are needed for your project?

A

Correct:
A write capacity unit represents one write per second, for an item up to 1 KB in size.

Item sizes for writes are rounded up to the next 1 KB multiple. For example, writing a 500-byte item consumes the same throughput as writing a 1 KB item. So, for the given use-case, each object is of size 4.5 KB, which will be rounded up to 5KB.

Therefore, for 6 objects, you need 6x5 = 30 WCUs.
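
The same arithmetic as a tiny sketch:

  import math

  item_size_kb = 4.5
  writes_per_second = 6

  wcu_per_write = math.ceil(item_size_kb)       # 4.5 KB rounds up to 5 KB -> 5 WCUs per write
  total_wcu = wcu_per_write * writes_per_second
  print(total_wcu)                              # 30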

26
Q

Your team has just signed a year-long contract with a client to maintain a three-tier web application that needs to be moved to AWS Cloud. The application has steady traffic throughout the day and needs to run on a reliable system with no downtime or access issues. The solution needs to be cost-optimal for this client.

Which of the following options should you choose?

A

Correct option:
Amazon EC2 Reserved Instances - Reserved Instances are not physical instances, but rather a billing discount applied to the use of On-Demand Instances in your account. These On-Demand Instances must match certain attributes, such as instance type and Region, to benefit from the billing discount

Incorrect:
Amazon EC2 Spot Instances - Spot instances are useful if your applications can be interrupted, like data analysis, batch jobs, background processing, and optional tasks. Spot instances can be pulled down anytime without prior notice.
Amazon EC2 On-Demand Instances - You have full control over its lifecycle—you decide when to launch, stop, hibernate, start, reboot, or terminate it. But, On-Demand instances cost a lot more than Reserved instances.
On-premises EC2 instance - On-premises implies the client has to maintain the physical machines, including their capacity provisioning and maintenance. This is not an option when the client is planning to move to AWS Cloud.

27
Q

An organization with online transaction processing (OLTP) workloads has successfully moved to DynamoDB after having many issues with traditional database systems. However, a few months into production, DynamoDB tables are consistently recording high latency.

As a Developer Associate, which of the following would you suggest to reduce the latency? (Select two)

A

Correct:
Consider using Global tables if your application is accessed by globally distributed users - With global tables, you can specify the AWS Regions where you want the table to be available. This can significantly reduce latency for your users
Use eventually consistent reads in place of strongly consistent reads whenever possible - If your application doesn’t require strongly consistent reads, consider using eventually consistent reads

Incorrect:
Increase the request timeout settings, so the client gets enough time to complete the requests, thereby reducing retries on the system - a longer timeout does not address the underlying latency of the requests
Reduce connection pooling, which keeps the connections alive even when user requests are not present, thereby blocking the services - When you're not making requests, consider having the client send dummy traffic to a DynamoDB table. Alternatively, you can reuse client connections or use connection pooling. All of these techniques keep internal caches warm, which helps keep latency low.
Use DynamoDB Accelerator (DAX) for businesses with heavy write-only workloads - Use DAX for read-heavy workloads
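
As an illustration, a read that explicitly uses eventual consistency (which is also the default) and therefore consumes half the read capacity of a strongly consistent read; the table and key are placeholders:

  import boto3

  table = boto3.resource("dynamodb").Table("transactions")   # placeholder
  resp = table.get_item(
      Key={"txId": "tx-001"},
      ConsistentRead=False,   # eventually consistent read
  )
  print(resp.get("Item"))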

28
Q

You are working for a technology startup building web and mobile applications. You would like to pull Docker images from the ECR repository called demo so you can start running local tests against the latest application version.

Which of the following commands must you run to pull existing Docker images from ECR? (Select two)

A

Correct options:
$(aws ecr get-login --no-include-email)
docker pull 1234567890.dkr.ecr.eu-west-1.amazonaws.com/demo:latest

Incorrect:
docker login -u $AWS_ACCESS_KEY_ID -p $AWS_SECRET_ACCESS_KEY - You cannot login to AWS ECR this way. AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are only used by the CLI and not by docker.

aws docker push 1234567890.dkr.ecr.eu-west-1.amazonaws.com/demo:latest - docker push here is the wrong answer, you need to use docker pull.

docker build -t 1234567890.dkr.ecr.eu-west-1.amazonaws.com/demo:latest - This is a docker command that is used to build Docker images from a Dockerfile.

29
Q

Your company leverages Amazon CloudFront to provide content via the internet to customers with low latency. Aside from latency, security is another concern and you are looking for help in enforcing end-to-end connections using HTTPS so that content is protected.

Which of the following options is available for HTTPS in AWS CloudFront?

A

Correct option:
Between clients and CloudFront as well as between CloudFront and backend

Incorrect options:

Between clients and CloudFront only - This is incorrect as you can choose to require HTTPS between CloudFront and your origin.

Between CloudFront and backend only - This is incorrect as you can choose to require HTTPS between viewers and CloudFront.

Neither between clients and CloudFront nor between CloudFront and backend - This is incorrect as you can choose HTTPS settings both for communication between viewers and CloudFront as well as between CloudFront and your origin.

30
Q

A company has a workload that requires 14,000 consistent IOPS for data that must be durable and secure. The compliance standards of the company state that the data should be secure at every stage of its lifecycle on all of the EBS volumes they use.

Which of the following statements are true regarding data security on EBS?

A

Correct:
EBS volumes support both in-flight encryption and encryption at rest using KMS

Incorrect:
EBS volumes support in-flight encryption but do not support encryption at rest - all data moving between the volume and the instance is encrypted
EBS volumes do not support in-flight encryption but do support encryption at rest using KMS - data at rest is encrypted
EBS volumes don’t support any encryption - This is an incorrect statement.