Practice Test 6: AWS Certified Cloud Practitioner Practice Exam (6) Flashcards
You have developed a web application targeting a global audience. Which of the following will help you achieve the highest redundancy and fault tolerance from an infrastructure perspective?
- Deploy the application in multiple Availability Zones in a Single AWS Region
- Deploy the application in a single Availability Zone
- There is no need to architect for these capabilities in AWS, as AWS is redundant by default
- Deploy the application in multiple Availability Zones in multiple AWS Regions
Deploy the application in multiple Availability Zones in multiple AWS Regions
Explanation
Since you are targeting a global audience, you should leverage multiple AWS Regions to serve content closer to your users. The deployment option that gives you the highest redundancy is to deploy the application in multiple Availability Zones within multiple AWS Regions. This redundancy also increases the fault tolerance of the application because if there is an outage in a single Availability Zone, the other Availability Zones can handle requests.
Additional information:
It is important to understand that the AWS Cloud infrastructure is built around Regions and Availability Zones (AZs). A Region is a geographical location that contains multiple Availability Zones. Each AWS Region is designed to be completely isolated from the other AWS Regions, which achieves the greatest possible fault tolerance and stability. An Availability Zone consists of one or more data centers that are completely isolated from the other Availability Zones. Each AWS Region has at least two Availability Zones; most have three. Each Availability Zone is engineered to be independent from failures in other Availability Zones. Deploying your resources across multiple Availability Zones offers you the ability to operate production applications and databases that are more resilient, highly available, and scalable than would be possible from a single data center.
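For readers who want to explore this from the SDK side, here is a minimal boto3 (Python) sketch, assuming credentials are already configured, that lists the Regions available to an account and the Availability Zones in the client's Region; the Region name is an illustrative assumption:

```python
# Minimal sketch: listing Regions and Availability Zones with boto3.
# The region_name below is an illustrative placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Regions enabled for the account
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
print("Regions:", regions)

# Availability Zones in the client's Region
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], az["State"])
```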
References:
https://d1.awsstatic.com/whitepapers/aws-overview.pdf
https://aws.amazon.com/about-aws/global-infrastructure/regions_az/
Which of the following services is used when encrypting EBS volumes?
- AWS KMS
- AWS WAF
- Amazon Macie
- Amazon GuardDuty
AWS KMS
Explanation
Amazon EBS encryption offers a straightforward encryption solution for your EBS volumes that does not require you to build, maintain, and secure your own key management infrastructure. You can configure Amazon EBS to use the AWS Key Management Service (AWS KMS) to create and control the encryption keys used to encrypt your data. AWS Key Management Service is also integrated with other AWS services, including Amazon S3 and Amazon Redshift, to make it simple to encrypt and decrypt your data.
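As an illustration of that integration, a minimal boto3 (Python) sketch that creates a KMS-encrypted EBS volume; the Availability Zone and key alias are placeholder assumptions:

```python
# Minimal sketch: creating a KMS-encrypted EBS volume with boto3.
# The Availability Zone and key alias are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=20,                      # GiB
    VolumeType="gp3",
    Encrypted=True,               # encrypt the volume at rest
    KmsKeyId="alias/my-ebs-key",  # omit to use the default aws/ebs key
)
print(volume["VolumeId"])
```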
The other options are incorrect:
“Amazon GuardDuty” is incorrect. Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your AWS accounts and workloads. GuardDuty analyzes continuous streams of meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately.
“AWS WAF” is incorrect. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.
“Amazon Macie” is incorrect. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect sensitive data stored in Amazon S3. Macie automatically detects a large and growing list of sensitive data types, including personally identifiable information (PII) such as names, addresses, and credit card numbers. Macie automatically provides an inventory of Amazon S3 buckets including a list of unencrypted buckets, publicly accessible buckets, and buckets shared with other AWS accounts. Then, Macie applies machine learning and pattern matching techniques to the buckets you select to identify and alert you to sensitive data. Amazon Macie can also be used in combination with other AWS services, such as AWS Step Functions to take automated remediation actions. This can help you meet regulations, such as the General Data Privacy Regulation (GDPR).
References:
https://aws.amazon.com/kms/
https://aws.amazon.com/ebs/faqs/
Which of the following services is an AWS repository management system that allows for storing, versioning, and managing your application code?
- AWS CodePipeline
- AWS X-Ray
- Amazon CodeGuru
- AWS CodeCommit
AWS CodeCommit
Explanation
AWS CodeCommit is designed for software developers who need a secure, reliable, and scalable source control system to store and version their code. In addition, AWS CodeCommit can be used by anyone looking for an easy to use, fully managed data store that is version controlled. For example, IT administrators can use AWS CodeCommit to store their scripts and configurations. Web designers can use AWS CodeCommit to store HTML pages and images.
AWS CodeCommit makes it easy for companies to host secure and highly available private Git repositories. Customers can use AWS CodeCommit to securely store anything from source code to binaries.
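For illustration, a minimal boto3 (Python) sketch that creates a CodeCommit repository; the repository name and Region are made-up examples:

```python
# Illustrative sketch: creating a CodeCommit repository with boto3.
# The repository name is a made-up example.
import boto3

codecommit = boto3.client("codecommit", region_name="us-east-1")

repo = codecommit.create_repository(
    repositoryName="my-demo-repo",
    repositoryDescription="Example repository for application code",
)
# The clone URLs returned here are what developers would use with git.
print(repo["repositoryMetadata"]["cloneUrlHttp"])
```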
The other options are incorrect:
AWS CodePipeline is incorrect. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
AWS X-Ray is incorrect. AWS X-Ray is a service that collects data about requests that your application serves, and provides tools you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization.
Amazon CodeGuru is incorrect. Amazon CodeGuru is a developer tool that provides intelligent recommendations to improve code quality and identify an application’s most expensive lines of code.
References:
https://d1.awsstatic.com/whitepapers/aws-overview.pdf
What factors determine how you are charged when using AWS Lambda? (Choose TWO)
- Compute time consumed
- Number of volumes
- Placement Groups
- Storage consumed
- Number of requests to your functions
Compute time consumed
Number of requests to your functions
Explanation
With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the time it takes for your code to execute.
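To make the pricing model concrete, here is a rough back-of-the-envelope estimate in Python; the unit prices and workload numbers are illustrative assumptions, not official figures, so always check the AWS Lambda pricing page for your Region:

```python
# Back-of-the-envelope Lambda cost estimate.
# The unit prices below are illustrative examples, not official figures.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # example: $0.20 per 1M requests
PRICE_PER_GB_SECOND = 0.0000166667     # example price per GB-second

requests = 5_000_000                    # invocations per month
duration_seconds = 0.3                  # average execution time
memory_gb = 0.512                       # 512 MB allocated

compute_gb_seconds = requests * duration_seconds * memory_gb
monthly_cost = (requests * PRICE_PER_REQUEST
                + compute_gb_seconds * PRICE_PER_GB_SECOND)
print(f"Estimated monthly cost: ${monthly_cost:.2f}")
```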
The other options are incorrect:
“Placement Groups” is incorrect. Placement Groups are logical groupings or clusters of EC2 instances within a single Availability Zone.
“Storage consumed” and “Number of volumes” are incorrect. Lambda is not a storage service. It is a compute service to run your applications.
References:
https://docs.aws.amazon.com/whitepapers/latest/how-aws-pricing-works/how-aws-pricing-works.pdf
According to the AWS shared responsibility model, what are the controls that customers fully inherit from AWS? (Choose TWO)
- Awareness and Training
- Resource Configuration Management
- Data Center Security Controls
- Environmental Controls
- Communication Controls
Data Center Security Controls
Environmental Controls
Explanation
AWS is responsible for physical controls and environmental controls. Customers inherit these controls from AWS.
As mentioned in the AWS Shared Responsibility Model page, Inherited Controls are controls which a customer fully inherits from AWS such as physical controls and environmental controls. As a customer deploying an application on AWS infrastructure, you inherit security controls pertaining to the AWS physical, environmental and media protection, and no longer need to provide a detailed description of how you comply with these control families. For example: You have built an application in AWS for customers to securely store their data, but your customers are concerned about the security of the data and ensuring compliance requirements are met. To address this, you assure your customer that “our company does not host customer data in its corporate or remote offices, but rather in AWS data centers that have been certified to meet industry security standards.” That includes physical and environmental controls to secure the data, which is the responsibility of Amazon. Customers of AWS do not have physical access to the AWS data centers, and as such, they fully inherit the physical and environmental security controls from AWS.
You can read more about AWS’ data center controls here:
https://aws.amazon.com/compliance/data-center/controls/
The other options are incorrect:
“Communication Controls” is incorrect. Communication controls are the responsibility of the customer.
“Awareness and Training” is incorrect. Awareness and Training belongs to the AWS Shared Controls. AWS trains AWS employees, but a customer must train their own employees.
“Resource Configuration Management” is incorrect. Configuration management belongs to the AWS Shared Controls. AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.
References:
https://aws.amazon.com/compliance/shared-responsibility-model/
What are the benefits of the AWS Marketplace service? (Choose TWO)
- Protects customers by performing security checks on listed products
- Provides cheaper options for purchasing Amazon EC2 on-demand instances
- Provides flexible pricing options that suit most customer needs
- Per-second billing
- Provides software solutions that run on AWS or any other Cloud Vendor
Protects customers by performing security checks on listed products
Provides flexible pricing options that suit most customer needs
Explanation
The AWS Marketplace is a curated digital catalog that makes it easy for customers to find, buy, and immediately start using the software and services that customers need to build solutions and run their businesses. The AWS Marketplace includes thousands of software listings from popular categories such as security, networking, storage, machine learning, business intelligence, database, and DevOps. AWS Marketplace is designed for Independent Software Vendors (ISVs), Value-Added Resellers (VARs), and Systems Integrators (SIs) who have software products they want to offer to customers in the cloud. Partners use AWS Marketplace to be up and running in days and offer their software products to customers around the world.
The AWS Marketplace provides value to buyers in several ways:
1- It simplifies software licensing and procurement with flexible pricing options and multiple deployment methods. Flexible pricing options include free trial, hourly, monthly, annual, multi-year, and BYOL.
2- Customers can quickly launch pre-configured software with just a few clicks, and choose software solutions in AMI and SaaS formats, as well as other formats.
3- It ensures that products are scanned periodically for known vulnerabilities, malware, default passwords, and other security-related concerns.
The other options are incorrect:
“Provides cheaper options for purchasing Amazon EC2 on-demand instances” is incorrect. The AWS marketplace cannot be used to buy Amazon EC2 on-demand instances.
“Provides software solutions that run on AWS or any other Cloud vendor” is incorrect. The AWS Marketplace provides software solutions that run on AWS only.
“Per-second billing” is incorrect. The AWS marketplace pricing options include free trial, hourly, monthly, annual, multi-year, and BYOL. Per-second billing is found on AWS resources and services only. It is not found in the marketplace.
References:
https://aws.amazon.com/marketplace
https://docs.aws.amazon.com/marketplace/latest/userguide/what-is-marketplace.html
What should you consider when storing data in Amazon Glacier?
- Pick the right Glacier class based on your retrieval needs
- Amazon Glacier only accepts data in a compressed format
- Attach Glacier to an EC2 instance to be able to store data
- Glacier can only be used to store frequently accessed data and data archives
Pick the right Glacier class based on your retrieval needs
Explanation
AWS customers use Amazon Glacier to back up large amounts of data at very low cost. There are three different storage classes for Amazon Glacier: Amazon S3 Glacier Instant Retrieval, Amazon S3 Glacier Flexible Retrieval, and Amazon S3 Glacier Deep Archive.
Choosing between S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, or S3 Glacier Deep Archive depends on how quickly you must retrieve your data. S3 Glacier Instant Retrieval delivers the fastest access to archive storage, with the same throughput and milliseconds access as the S3 Standard and S3 Standard-IA storage classes. With S3 Glacier Flexible Retrieval, you can retrieve your data within a few minutes to several hours (1-5 minutes to 12 hours), whereas with S3 Glacier Deep Archive, the minimum retrieval period is 12 hours.
For archive data that needs immediate access, such as medical images, news media assets, or genomics data, choose the S3 Glacier Instant Retrieval storage class. For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, choose S3 Glacier Flexible Retrieval (formerly S3 Glacier), with retrieval in minutes or free bulk retrievals in 5 - 12 hours. To save even more on long-lived archive storage such as compliance archives and digital media preservation, choose S3 Glacier Deep Archive, the lowest cost storage in the cloud with data retrieval from 12 - 48 hours.
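As a quick illustration, a boto3 (Python) sketch that writes an object directly into an archive storage class; the bucket, key, and file names are placeholders:

```python
# Minimal sketch: archiving an object directly to a Glacier storage class.
# Bucket, key, and file names are placeholders.
import boto3

s3 = boto3.client("s3")

with open("backup-2024.tar.gz", "rb") as data:
    s3.put_object(
        Bucket="my-archive-bucket",
        Key="backups/backup-2024.tar.gz",
        Body=data,
        StorageClass="DEEP_ARCHIVE",  # or "GLACIER_IR" / "GLACIER"
    )
```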
The other options are incorrect:
“Amazon Glacier only accepts data in a compressed format” is incorrect. You can store virtually any kind of data in any format. But your costs will be lower if you aggregate and compress your data.
“Attach Glacier to an EC2 Instance to be able to store data” is incorrect. Glacier cannot be attached to EC2 instances. Glacier is a storage class of S3.
The storage service that AWS customers can use to attach storage volumes to an Amazon EC2 instance is Amazon EBS. An Amazon EBS volume is a durable, block-level storage device that you can attach to your EC2 instances. After you attach a volume to an instance, you can use it as you would use a physical hard drive. AWS recommends Amazon EBS for data that must be quickly accessible and requires long-term persistence. EBS volumes are particularly well-suited for use as the primary storage for operating systems, databases, or for any applications that require fine granular updates and access to raw, unformatted, block-level storage.
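For illustration, a minimal boto3 (Python) sketch of attaching an EBS volume to an instance; the volume ID, instance ID, and device name are placeholders:

```python
# Illustrative sketch: attaching an EBS volume to an EC2 instance with boto3.
# The volume ID, instance ID, and device name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.attach_volume(
    VolumeId="vol-0123456789abcdef0",
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",   # appears inside the instance as a block device
)
```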
“Glacier can only be used to store frequently accessed data and data archives” is incorrect. Glacier is not for frequently accessed data.
References:
https://aws.amazon.com/s3/storage-classes/
A financial services company decides to migrate one of its applications to AWS. The application deals with sensitive data, such as credit card information, and must run on a PCI-compliant environment. Which of the following is the company’s responsibility when building a PCI-compliant environment in AWS? (Choose TWO)
- Ensure that all PCI DSS physical security requirements are met
- Configure the underlying infrastructure of AWS services to meet all PCI DSS requirements
- Start the migration process immediately as all AWS services are PCI compliant
- Restrict any access to cardholder data and create a policy that addresses information security for all personnel
- Ensure that AWS services are configured properly to meet all PCI DSS standards
Restrict any access to cardholder data and create a policy that addresses information security for all personnel
Ensure that AWS services are configured properly to meet all PCI DSS standards
Explanation
The Payment Card Industry Data Security Standard (PCI DSS) helps ensure that companies maintain a secure environment for storing, processing, and transmitting credit card information or sensitive authentication data (SAD). AWS customers who use AWS services to store, process, or transmit cardholder data can rely on AWS infrastructure as they manage their own PCI DSS compliance certification.
Security and compliance are important shared responsibilities between AWS and the customer. It is the customer’s responsibility to maintain their PCI DSS cardholder data environment (CDE) and scope, and be able to demonstrate compliance of all PCI controls, but customers are not alone in this journey. The use of PCI DSS compliant AWS services can facilitate customer compliance, and the AWS Security Assurance Services team can assist customers with additional information specific to demonstrating the PCI DSS compliance of their AWS workloads. AWS Services listed as PCI DSS compliant means that they can be configured by customers to meet their PCI DSS requirements. It does not mean that any use of that service is automatically compliant. A good rule-of-thumb is that if a customer can set a particular configuration, they are responsible for setting it appropriately to meet PCI DSS requirements. AWS customers are also responsible for creating a policy that addresses information security for all personnel, and implementing strong access controls to restrict any access to cardholder data.
The other options are incorrect:
“Ensure that all PCI DSS physical security requirements are met” is incorrect. AWS is responsible for the security and compliance of its physical infrastructure, including the PCI DSS requirements.
“Start the migration process immediately as all AWS services are PCI compliant” is incorrect. Only certain AWS services are in-scope for PCI compliance. You can find a full list of in-scope services here. https://aws.amazon.com/compliance/services-in-scope/
“Configure the underlying infrastructure of AWS services to meet all applicable requirements of PCI DSS” is incorrect. Configuring the underlying infrastructure of AWS services is the responsibility of AWS, not the customer. If a customer is using one of the services that are in-scope for PCI DSS, the entire infrastructure that supports these services is compliant.
References:
https://d1.awsstatic.com/whitepapers/compliance/pci-dss-compliance-on-aws.pdf
https://aws.amazon.com/compliance/shared-responsibility-model/
Which statement is true in relation to security in AWS?
- Server-side encryption is the responsibility of AWS
- AWS customers are responsible for patching any database software running on Amazon EC2
- For serverless data stores such as Amazon S3, the customer is responsible for patching the operating system
- AWS is responsible for the security of your application
AWS customers are responsible for patching any database software running on Amazon EC2
Explanation
AWS customers have two options to host their databases on AWS:
1- Using a managed database:
AWS Customers can use managed databases such as Amazon RDS to host their databases. In this case, AWS is responsible for performing all database management tasks such as hardware provisioning, patching, setup, configuration, backups, or recovery.
2- Installing database software on Amazon EC2:
Instead of using a managed database, AWS customers can install any database software they want on Amazon EC2 and host their databases there. In this case, customers are responsible for performing all of the necessary configuration and management tasks.
Note: For Amazon RDS, all security patches and updates are applied automatically to the database software once they are released. But for databases installed on Amazon EC2, customers are required to apply security patches and updates manually, or use the AWS Systems Manager service to apply them on a scheduled basis (every week, for example).
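As a rough illustration of the Systems Manager approach mentioned above, the sketch below (Python/boto3) triggers a one-off patch run using the standard AWS-RunPatchBaseline document; the instance ID is a placeholder, and a production setup would more likely use Patch Manager with a maintenance window:

```python
# Hedged sketch: triggering a patch run on an EC2 instance via AWS Systems Manager.
# The instance ID is a placeholder.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},  # "Scan" only reports missing patches
)
```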
The other options are incorrect:
“For serverless data stores such as Amazon S3, the customer is responsible for patching the operating system” is incorrect. Amazon S3 is a serverless data store service that stores customer data without requiring management of underlying storage infrastructure. Amazon S3 enables customers to offload the administrative burdens of operating and scaling storage to AWS so that they do not have to worry about hardware provisioning, operating system patching, or maintenance of the platform.
AWS is responsible for most of the configuration and management tasks, but customers are still responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.
Note:
A serverless service is a service that does not require the customer to manage the infrastructure layer, the operating system layer, or the platform layer. A serverless service can be a compute service such as AWS Lambda, an integration service such as Amazon SQS, or a data store service such as Amazon S3.
Read more about serverless services on AWS here:
https://aws.amazon.com/serverless/
“AWS is responsible for the security of your application” is incorrect. It is the responsibility of the customer to build secure applications.
“Server-side encryption is the responsibility of AWS” is incorrect. It is the responsibility of the customer to encrypt data either on the client side or on the server side.
References:
https://aws.amazon.com/compliance/shared-responsibility-model/
How can you protect data stored on Amazon S3 from accidental deletion?
- By enabling S3 Versioning
- By disabling S3 Cross-Region Replication (CRR)
- By configuring S3 Lifecycle Policies
- By configuring S3 Bucket Policies
By enabling S3 Versioning
Explanation
Versioning in Amazon S3 is a means of keeping multiple variants of an object in the same bucket. You can use the S3 Versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets. With versioning, you can recover more easily from both unintended user actions and application failures.
Versioning-enabled buckets can help you recover objects from accidental deletion or overwrite. For example, if you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version. Also, if you overwrite an object, it results in a new object version in the bucket. You can always restore the previous version.
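To make this concrete, a minimal boto3 (Python) sketch that enables versioning and lists object versions; the bucket and prefix names are placeholders:

```python
# Minimal sketch: enabling versioning on a bucket and inspecting object versions.
# The bucket and prefix names are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="my-important-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# After an accidental delete, the delete marker and older versions show up here;
# removing the delete marker (by VersionId) restores the object.
versions = s3.list_object_versions(Bucket="my-important-bucket", Prefix="reports/")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])
```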
The other options are incorrect:
“By disabling S3 Cross-Region Replication (CRR)” is incorrect. S3 Cross-Region Replication (CRR) is an Amazon S3 feature that enables customers to replicate data across different AWS Regions to minimize latency for global users and/or meet compliance requirements. Disabling S3 Cross-Region Replication (CRR) does not help protect data from accidental deletion.
“By configuring S3 lifecycle policies” is incorrect. With S3 Lifecycle configuration rules, you can tell Amazon S3 to transition objects to less expensive storage classes, or archive or delete them. In order to reduce your Amazon S3 costs, you should create a lifecycle policy to automatically move old (or infrequently accessed) files to less expensive storage tiers, or to automatically delete them after a specified duration. The S3 Lifecycle feature is not meant to protect from accidental deletion of data.
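As an aside, a minimal boto3 (Python) sketch of such a lifecycle rule; the bucket name, prefix, and durations are illustrative assumptions:

```python
# Illustrative sketch: a lifecycle rule that transitions objects to Glacier
# after 30 days and expires them after one year. Names are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```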
“By configuring S3 Bucket Policies” is incorrect. A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. A Bucket Policy defines who can access a bucket, but does not help if an authorized user accidentally deleted objects in that bucket.
References:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html
Which feature enables users to sign into their AWS accounts with their existing corporate credentials?
- IAM Permissions
- Federation
- WAF Rules
- Access Keys
Federation
Explanation
With Federation, you can use single sign-on (SSO) to access your AWS accounts using credentials from your corporate directory. Federation uses open standards, such as Security Assertion Markup Language 2.0 (SAML), to exchange identity and security information between an identity provider (IdP) and an application.
AWS offers multiple options for federating your identities in AWS:
1- AWS Identity and Access Management (IAM): IAM supports identity federation with SAML 2.0-compatible identity providers, which enables users to sign in to their AWS accounts with their existing corporate credentials.
2- AWS IAM Identity Center (Successor to AWS Single Sign-On): AWS IAM Identity Center makes it easy to centrally manage federated access to multiple AWS accounts and business applications and provide users with single sign-on access to all their assigned accounts and applications from one place.
3- AWS Directory Service: AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, uses secure Windows trusts to enable users to sign in to the AWS Management Console, AWS Command Line Interface (CLI), and Windows applications running on AWS using their existing corporate Microsoft Active Directory credentials.
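As a rough sketch of how SAML-based federation looks from the SDK side (Python/boto3): the role and identity-provider ARNs are made-up placeholders, and obtaining the base64-encoded SAML assertion from your IdP is outside the scope of the snippet.

```python
# Hedged sketch: exchanging a SAML assertion from a corporate IdP for
# temporary AWS credentials. ARNs and the assertion are placeholders.
import boto3

sts = boto3.client("sts")

saml_assertion_b64 = "..."  # base64-encoded SAML response from the IdP (placeholder)

response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111122223333:role/FederatedDeveloper",
    PrincipalArn="arn:aws:iam::111122223333:saml-provider/CorporateIdP",
    SAMLAssertion=saml_assertion_b64,
)
creds = response["Credentials"]  # temporary AccessKeyId/SecretAccessKey/SessionToken
```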
The other options are incorrect:
“WAF rules” is incorrect. AWS WAF is a web application firewall that helps protect web applications from attacks by allowing you to configure rules that block malicious traffic.
You use WAF rules in a web ACL to block web requests based on criteria like the following:
- Scripts that are likely to be malicious. Attackers embed scripts that can exploit vulnerabilities in web applications. This is known as cross-site scripting (XSS).
- Malicious requests from a set of IP addresses or address ranges.
- SQL code that is likely to be malicious. Attackers try to extract data from your database by embedding malicious SQL code in a web request. This is known as SQL injection.
“IAM Permissions” is incorrect. IAM Permissions let you specify the desired access to AWS resources. Permissions are granted to IAM entities (users, user groups, and roles) and by default these entities start with no permissions. In other words, IAM entities can do nothing in AWS until you grant them your desired permissions.
“Access keys” is incorrect. Access keys are long-term credentials for an AWS IAM user or the AWS account root user. Access keys are not used for signing in to your account. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK).
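For contrast, a minimal sketch of what access keys are actually used for, namely programmatic requests via the SDK; the key values are placeholders, and in practice boto3 usually picks credentials up from environment variables or ~/.aws/credentials rather than hard-coded strings:

```python
# Illustrative sketch: access keys sign programmatic (CLI/SDK/API) requests,
# they are not console sign-in credentials. Key values are placeholders.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLEKEYID",         # placeholder
    aws_secret_access_key="examplesecretkey123",  # placeholder
)
s3 = session.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```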
References:
https://aws.amazon.com/identity/federation/
Which of the following AWS Support Plans gives you 24/7 access to Cloud Support Engineers via email & phone? (Choose TWO)
- Enterprise
- Business
- Standard
- Developer
- Premium
Enterprise
Business
Explanation
For Technical Support, each of the Business, Enterprise On-Ramp, and Enterprise support plans provides 24x7 phone, email, and chat access to Support Engineers.
The other options are incorrect:
“Premium” and “Standard” are incorrect. Premium and Standard are not valid AWS Support plans.
“Developer” is incorrect. The Developer plan does not include 24/7 phone access to Cloud Support Engineers.
References:
https://aws.amazon.com/premiumsupport/compare-plans/
What is the maximum amount of data that can be stored in S3 in a single AWS account?
- 10 PetaBytes
- 5 TeraBytes
- 100 PetaBytes
- Virtually Unlimited Storage
Virtually Unlimited Storage
Explanation
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes.
Which of the following security resources are available to any user for free? (Choose TWO)
- AWS Bulletin
- AWS Support API
- AWS TAM
- AWS Classroom Training
- AWS Security Blog
AWS Bulletin
AWS Security Blog
Explanation
The AWS free security resources include the AWS Security Blog, Whitepapers, AWS Developer Forums, Articles and Tutorials, Training, Security Bulletins, Compliance Resources and Testimonials.
The other options are incorrect.
“AWS Classroom Training” is incorrect. AWS provides live classes (Classroom Training) with accredited AWS instructors who teach you in-demand cloud skills and best practices using a mix of presentations, discussion, and hands-on labs. AWS Classroom Training is not free.
“AWS Support API” is incorrect. AWS Support API is available for AWS customers who have a Business, Enterprise On-Ramp, or Enterprise support plan. The AWS Support API provides programmatic access to AWS Support Center features to create, manage, and close support cases.
“AWS TAM” is incorrect. A Technical Account Manager (TAM) is your designated technical point of contact who provides advocacy and guidance to help plan and build solutions using best practices and proactively keep your AWS environment operationally healthy and secure. TAM is available only for AWS customers who have an Enterprise On-Ramp or Enterprise support plan.
Which of the following actions may reduce Amazon EBS costs? (Choose TWO)
- Deleting unused Bucket ACLs
- Using reservations
- Deleting unnecessary snapshots
- Distributing requests to multiple volumes
- Changing the type of the volume
Deleting unnecessary snapshots
Changing the type of the volume
Explanation
With Amazon EBS, it is important to keep in mind that you are paying for provisioned capacity and performance, even if the volume is unattached or has very low write activity. To optimize storage performance and costs for Amazon EBS, monitor volumes periodically to identify unattached, underutilized or overutilized volumes, and adjust provisioning to match actual usage.
When you want to reduce the costs of Amazon EBS consider the following:
1- Delete Unattached Amazon EBS Volumes:
An easy way to reduce wasted spend is to find and delete unattached volumes. When EC2 instances are stopped or terminated, attached EBS volumes are not always deleted automatically and continue to accrue charges for the provisioned storage (a sketch for finding such volumes and stale snapshots follows this list).
2- Resize or Change the EBS Volume Type:
Another way to optimize storage costs is to identify volumes that are underutilized and downsize them or change the volume type.
3- Delete Stale Amazon EBS Snapshots:
If you have a backup policy that takes EBS volume snapshots daily or weekly, you will quickly accumulate snapshots. Check for stale snapshots that are over 30 days old and delete them to reduce storage costs.
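A minimal boto3 (Python) sketch of the first and third checks above, finding unattached volumes and snapshots older than 30 days; the Region is an assumption, and the deletion step itself is intentionally left out:

```python
# Sketch: spot unattached EBS volumes and stale snapshots in one Region.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1- Volumes not attached to any instance ("available" state)
unattached = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
print("Unattached volumes:", [v["VolumeId"] for v in unattached])

# 3- Snapshots owned by this account that are older than 30 days
cutoff = datetime.now(timezone.utc) - timedelta(days=30)
snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
stale = [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]
print("Stale snapshots:", stale)
```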
The other options are incorrect:
“Deleting unused Bucket ACLs” is incorrect. Amazon EBS does not use buckets. Buckets are used in S3 storage. Amazon S3 Bucket ACLs enable you to manage access to buckets. Each bucket has an ACL attached to it as a subresource. You can use Bucket ACLs to grant basic read/write permissions to other AWS accounts.
Note: You have three options to control access to an Amazon S3 Bucket:
1- IAM Policies
2- Bucket Policies
3- Bucket ACLs
“Distributing requests to multiple volumes” is incorrect. Amazon EBS is a storage service, not a compute service.
“Using reservations” is incorrect. Amazon EBS does not offer a reservation pricing model; you pay for the storage you provision.