S3 Flashcards
A _______ is a container for objects.
bucket
An _______ is a file and any metadata that describes that file
object
Every object in Amazon S3 is stored in a _____
bucket
Bucket names must be unique only within your account. T/F
False. Bucket names must be globally unique within a partition (standard AWS Regions, China Regions, AWS GovCloud (US)).
Allowed length of a bucket name.
3-63 characters.
Which characters are allowed in bucket names?
Only lowercase letters, numbers, dots (.), and hyphens (-); no uppercase letters.
A bucket name must begin with ?
a lowercase letter or a number
A bucket name must end with ?
a lowercase letter or a number
Use IP address format for website buckets. T/F
False. Bucket names must not be formatted as IP addresses (for example, 192.168.5.4).
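The naming rules above can be checked programmatically. A minimal sketch in Python (a simplification of the full AWS rules, which have a few extra edge cases such as restricted prefixes):

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check a name against the S3 bucket-naming rules listed above
    (length, allowed characters, start/end, not IP-formatted)."""
    if not 3 <= len(name) <= 63:
        return False
    # only lowercase letters, digits, dots, hyphens; must start/end alphanumeric
    if not re.fullmatch(r"[a-z0-9][a-z0-9.\-]*[a-z0-9]", name):
        return False
    # must not look like an IP address (e.g. 192.168.0.1)
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    return True

print(is_valid_bucket_name("my-bucket"))   # True
print(is_valid_bucket_name("MyBucket"))    # False (uppercase)
print(is_valid_bucket_name("192.168.0.1")) # False (IP format)
```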
Buckets used with ____________ can’t have dots in their names.
S3 Transfer Acceleration.
Best practice is to avoid using dots except for
Buckets used only for static web hosting.
In terms of implementation, buckets and objects are ___________ and have unique IDs called _________
AWS resources; ARNs (Amazon Resource Names)
The AWS account that creates a resource owns that resource. T/F
True
If you create an IAM user in your AWS account and grant the user permission to create a bucket, the user can create a bucket. Now, who owns the bucket?
The user does not own the bucket; the AWS account that the user belongs to owns the bucket
If you create an IAM user in your AWS account and grant the user permission to create a bucket, the user can create a bucket. Can the user perform other activities on the bucket?
No. The user needs additional permission from the resource owner to perform any other bucket operations.
Public access is granted to buckets and objects through ____________, ___________, or both.
access control lists (ACLs); bucket policies
_____________ settings can override ACLs and bucket policies
Amazon S3 Block Public Access
You can apply Block Public Access settings to _________buckets in your account.
individual or all buckets
You can configure object-level permissions by configuring _________ specific to that object.
An access control list (ACL)
CORS
cross-origin resource sharing
What does CORS enable?
You can configure your bucket to allow cross-origin requests.
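A sketch of what a CORS configuration looks like. The bucket name and origin are placeholders; the boto3 call is shown commented out because it needs AWS credentials:

```python
# A minimal CORS configuration allowing one origin to GET and PUT objects.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://www.example.com"],  # placeholder origin
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,   # how long browsers may cache the preflight
        }
    ]
}
# With boto3 (requires credentials):
# import boto3
# boto3.client("s3").put_bucket_cors(
#     Bucket="my-example-bucket", CORSConfiguration=cors_configuration)
print(cors_configuration["CORSRules"][0]["AllowedMethods"])
```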
S3 Lifecycle
You can define lifecycle rules for objects in your bucket that have a well-defined lifecycle.
Server access logging S3
Server access logging provides detailed records for the requests that are made to a bucket
What is the default server access logs collection status?
By default, S3 doesn’t collect server access logs.
Where does S3 deliver the server access logs for a bucket?
Into another bucket (the target bucket).
Can the target and source buckets reside in different Regions, and can the target bucket have a default retention period?
No. The target and source buckets must reside in the same Region, and the target bucket must not have a default retention period.
What is retention period?
A retention period specifies a fixed period of time during which an object remains locked.
What happens when an object is locked?
When your object is locked, it is WORM protected and can’t be overwritten or deleted.
What is WORM in object locking?
Write Once Read Many
For how long can you lock an object?
A retention period can be set in days or years, with a minimum of 1 day and no maximum limit.
Example situation of using object locking.
To keep the golden copy of the business records.
Charge for using object locking.
No charge for using object locking.
What are WORM Storage capabilities?
S3 Glacier Vault Lock and S3 Object Lock
What is the unique feature of S3 Object lock?
Amazon S3 Object Lock is the only cloud object storage WORM capability on the market today that gives you the flexibility to apply retention settings to individual objects, in addition to default retention settings for all objects within an S3 bucket.
Amazon S3 Object Lock provides object retention by
1. A retention period
2. A legal hold
What is a legal hold ?
A legal hold provides the same protection as retention period, but it has no expiration date. A legal hold remains in place until you explicitly remove it.
Can an object version have a combination of both a retention period and a legal hold?
Yes. Ex: 1-Year retention period and legal hold.
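A sketch of the request payloads for applying both protections to one object version. Bucket and key names are hypothetical; the boto3 calls are commented out because they need credentials and an Object Lock-enabled bucket:

```python
from datetime import datetime, timezone

# Governance-mode retention until a fixed date (Compliance mode works the same
# way with Mode="COMPLIANCE").
retention = {
    "Mode": "GOVERNANCE",
    "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc),
}
# A legal hold has no expiration; it stays until explicitly set to "OFF".
legal_hold = {"Status": "ON"}

# With boto3 (requires credentials; bucket/key are placeholders):
# import boto3
# s3 = boto3.client("s3")
# s3.put_object_retention(Bucket="my-bucket", Key="records.csv",
#                         Retention=retention)
# s3.put_object_legal_hold(Bucket="my-bucket", Key="records.csv",
#                          LegalHold=legal_hold)
print(retention["Mode"], legal_hold["Status"])
```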
What retention modes are available with Amazon S3 lock?
1. Governance
2. Compliance
What does Governance mode entail?
Governance mode protects objects from deletion by most users during a predefined retention period, while users with special permission can still alter the retention settings or delete the objects.
Users with _______ permission can override or remove governance-mode retention settings
s3:BypassGovernanceRetention
Most customers use Governance mode over Compliance mode because
they don’t have regulated storage requirements.
What does Compliance mode entail?
With Compliance mode, no user, including the root user of the AWS account, can delete objects during the predefined retention period.
What is the only way to delete the object under Compliance mode within the retention period?
To delete the AWS account that the bucket belongs to.
When to actually use legal hold
Use legal hold when you aren’t sure how long you would like the objects to stay immutable.
Use Compliance mode if your data has ______ requirements
regulatory compliance
What is the retention period of legal hold.
The retention period of legal hold is infinite. Only users with special permissions are able to lift the hold.
How does S3 object lock apply for Cross-region replication?
In addition to locking objects, you can configure your S3 buckets to enable automatic, asynchronous copying of locked data and associated metadata to a bucket in a different AWS Region.
S3 objects can only be locked if the respective bucket is configured to use Object Lock. T/F
True.
Amazon S3 Object lock only works for buckets that have _________ enabled.
Versioning.
Can you turn on S3 Object Lock for existing buckets? If yes, how?
You can enable S3 Object Lock only on new buckets. To enable Object Lock on an existing bucket, contact AWS Support.
Can anyone put objects with object lock into a bucket which is configured with object lock?
No. Only users with special permissions can add immutable objects to the respective bucket.
When a bucket is configured with Object lock and retention mode, does it apply to all the objects that are placed in the bucket ?
Yes.
Can I change retention mode and period for a specific object ?
Yes.
Object Lock settings can’t be specified on _____ using the S3 console
upload
When you create a bucket with Object Lock enabled, Amazon S3 automatically enables ______ for the bucket.
versioning
If you create a bucket with Object Lock enabled, you can’t _____________
disable Object Lock or suspend versioning for the bucket.
If you want to automatically protect object versions that are placed in the bucket, you can configure a ______.
default retention period
If you want to enforce the bucket default retention mode and period for all new object versions placed in a bucket, then ________
set the bucket defaults and deny users permission to configure object retention settings.
Bucket default settings require both a ________ and _______
mode and a period
After the object is created, its retention period is ________ from the bucket’s default retention period.
independent
If you configure a default retention period on a bucket, requests to upload objects to that bucket must include the ___________
Content-MD5 header.
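The Content-MD5 header value is the base64-encoded (not hex) MD5 digest of the request body, which can be computed with the standard library:

```python
import base64
import hashlib

def content_md5(body: bytes) -> str:
    """Base64-encoded MD5 digest, as required by the Content-MD5 header."""
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

print(content_md5(b"hello world"))  # XrY7u+Ae7tCTyyK7j1rNww==
```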
When you enable server access logging, the S3 console automatically updates your _________ to grant write access to the ________
bucket access control list (ACL); S3 log delivery group
Server access logs don’t record information about ____________
wrong-region redirect errors for Regions that launched after March 20, 2019.
How are S3 logs delivered?
Amazon S3 periodically collects access records, consolidates the records in log files, and then uploads the log files to your target bucket as log objects.
Which account does S3 use to write access logs?
Log Delivery Group
Server access log records are delivered on a _________
best effort basis
Most log records are delivered within _______
a few hours of the time that they are recorded, but they can be delivered more frequently.
The completeness and timeliness of server logging is guaranteed. T/F
False. Possibility of missing logs, rare but possible.
If you change the target bucket for logging from bucket A to bucket B, where might logs be delivered during the next hour?
Some logs for the next hour might continue to be delivered to bucket A, while others might be delivered to the new target bucket B.
What is a canonical user ID?
An alphanumeric identifier, such as 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
When is canonical userid used?
You can use this ID to identify an AWS account when granting cross-account access to buckets and objects using Amazon S3.
Amazon S3 event notifications are designed to be delivered _________
at least once.
What events can AWS publish notifications for S3
- New object created events
- Object removal events
- Restore object events
- Reduced Redundancy Storage (RRS) objects lost events
- Replication events.
Amazon S3 sends event notifications for replication configurations that have _________ enabled.
S3 replication metrics or S3 Replication Time Control (S3 RTC)
Amazon S3 supports __________ destinations where it can publish events
- Amazon Simple Notification Service (Amazon SNS)
- Amazon Simple Queue Service (Amazon SQS) queue
- AWS Lambda
What is Amazon Simple Notification Service (Amazon SNS)
Amazon SNS is a flexible, fully managed push messaging service that you can use to push messages to mobile devices or distributed services.
Main feature of SNS
With SNS you can publish a message once, and deliver it one or more times. Pub-Sub
Which type of SNS topic is currently allowed as an S3 event notification destination?
Only standard SNS topics are allowed; SNS FIFO topics are not.
What is Amazon Simple Queue Service (Amazon SQS) queue ?
Amazon SQS offers reliable and scalable hosted queues for storing messages as they travel between computers. You can use SQS to transmit any volume of data without requiring other services to be always available.
Which type of SQS queue is currently allowed as an S3 event notification destination?
Only standard SQS queues are allowed; FIFO SQS queues are not.
What causes an execution loop?
If your notification writes to the same bucket that triggers the notification, it could cause an execution loop.
To send S3 notifications to SNS, you must configure a _______ on the SNS side to which the notifications are written.
Topic
In which Region must the SNS topic that receives S3 notifications be?
The SNS topic must be in the same Region as the S3 bucket whose notifications it delivers.
Before you use the AWS SNS topic as event notification destination, what information do we need ?
1. The ARN of the SNS topic
2. A valid SNS topic subscription (the topic subscribers are notified when a message is published to the topic)
In which Region must the SQS queue that receives S3 notifications be?
The Amazon SQS queue must be in the same Region as your Amazon S3 bucket.
Before we can use the SQS queue as the event notification destination, what information do we need ?
ARN of the SQS Queue
To grant Amazon S3 permissions to publish messages to the SNS topic or SQS queue, you attach an __________ to the destination SNS topic or SQS queue.
AWS Identity and Access Management (IAM) policy
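A sketch of the policy document attached to a destination SQS queue so that S3 can send messages to it. The account ID, queue ARN, and bucket name below are placeholders:

```python
import json

# Resource policy allowing the S3 service principal to send messages to the
# queue, scoped to events coming from one (hypothetical) bucket.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:111122223333:s3-event-queue",
        "Condition": {
            "ArnLike": {"aws:SourceArn": "arn:aws:s3:*:*:my-example-bucket"}
        },
    }],
}
print(json.dumps(queue_policy, indent=2))
```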
Enabling notifications is a ______ level operation.
bucket
You store notification configuration information in the _______ associated with a bucket.
notification subresource
After creating or changing the bucket notification configuration, typically you must wait ______ for the changes to take effect.
5 minutes
Amazon S3 stores the notification configuration as ____ in the_______associated with a bucket.
XML , notification subresource
By default, to which events are the notifications enabled?
By default the notifications are not enabled for any type of event.
What configuration is stored in the notification subresource by default and why?
By default the notification subresource is empty since the notifications are not enabled for any type of events by default.
Price of SQS
The first 1 million Amazon SQS requests each month are free for all customers.
Price to setup SNS
No upfront costs.
SNS charges
SNS charges you based on the number of messages you publish, the number of messages you deliver, and any additional API calls for managing topics and subscriptions.
The notification message that Amazon S3 sends to publish an event is in the ______ format.
JSON
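A trimmed example of that JSON message and how a consumer might extract the event name, bucket, and key (fields abridged; real messages carry more detail):

```python
import json

# Abridged S3 event notification message (ObjectCreated:Put).
message = json.loads("""
{
  "Records": [{
    "eventSource": "aws:s3",
    "eventName": "ObjectCreated:Put",
    "s3": {
      "bucket": {"name": "my-example-bucket"},
      "object": {"key": "photos/cat.jpg", "size": 1024}
    }
  }]
}
""")

# Pull out (event, bucket, key) for each record.
events = [
    (r["eventName"], r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
    for r in message["Records"]
]
print(events)  # [('ObjectCreated:Put', 'my-example-bucket', 'photos/cat.jpg')]
```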
What does managing your storage lifecycle mean?
Managing your objects so that they are stored cost-effectively throughout their lifecycle.
What is an S3 Lifecycle configuration?
An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects.
What are the types of actions defined by the S3 lifecycle configuration?
1. Transition actions
2. Expiration actions
What are transition actions?
Transition actions define when objects transition from one storage class to another.
What are expiration actions?
Expiration actions define when objects expire. S3 deletes expired objects on your behalf.
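The two action types can be combined in a single lifecycle rule. A sketch (bucket name and prefix are hypothetical; the boto3 call is commented out because it needs credentials):

```python
# One rule: move objects under logs/ to Standard-IA after 30 days, to Glacier
# after 90 days, and delete them after a year.
lifecycle_configuration = {
    "Rules": [{
        "ID": "archive-then-expire-logs",
        "Status": "Enabled",
        "Filter": {"Prefix": "logs/"},
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 90, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 365},
    }]
}
# With boto3 (requires credentials):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-example-bucket",
#     LifecycleConfiguration=lifecycle_configuration)
print(lifecycle_configuration["Rules"][0]["ID"])
```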
Each object in Amazon S3 has a ______ class associated with it.
storage
Which storage classes offer high durability?
All storage classes offer high durability.
What storage classes does AWS offer for Frequently accessed data
1. S3 Standard
2. Reduced Redundancy Storage (RRS)
What is S3 Standard storage class?
S3 Standard is the default storage class. When you don’t specify a storage class, your object is stored in S3 Standard.
What is reduced redundancy storage class?
Reduced Redundancy Storage class or RRS is used for non-critical, reproducible data that can be stored with less redundancy than the S3 Standard class.
Which frequently accessed storage class does AWS not recommend?
Reduced Redundancy Storage.
Which is the most cost effective frequently-accessed data storage class
S3 Standard
What is the expected annual loss rate of objects stored in the RRS class?
0.01% of objects per year (RRS is designed for 99.99% durability).
If an RRS object is lost and requests are made to that object, then ________
S3 returns a 405 (Method Not Allowed) error.
What is S3 Intelligent-Tiering?
S3 Intelligent-Tiering helps optimize storage costs by automatically moving data to the most cost-effective access tier, with no operational overhead.
When access patterns change, _________ moves data on a granular object level between access tiers for cost savings.
S3 Intelligent Tiering.
_____________ is a perfect storage class to optimize cost for unknown or changing access patterns.
S3 Intelligent Tiering.
What additional fee applies when using S3 Intelligent-Tiering?
A small monthly per-object monitoring and automation fee.
Objects that are uploaded or transitioned to S3 Intelligent-Tiering are automatically stored in the _______ tier.
Frequent Access
How does Intelligent-Tiering move objects from the Frequent Access tier to the Infrequent Access tier?
S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the Infrequent Access tier.
After you activate one or both of the archive access tiers, S3 Intelligent-Tiering automatically moves objects that haven’t been accessed for _________ to the Archive Access tier, and after ______ of no access, to the Deep Archive Access tier.
90 consecutive days, 180 consecutive days
To access objects that have moved to the Intelligent-Tiering Archive Access tier, you must move them back to the _______ which can take _______
Frequent Access tier; with the standard retrieval option this can take 3-5 hours.
Is expedited option available to move Intelligent Tiering Archive Access tier to Frequent Access Tier ?
Yes
To access objects that have moved to the Intelligent-Tiering Deep Archive Access tier, you must restore them back to the ________ which can take up to _____
Frequent Access tier, 12 hours
Is expedited option available to move Intelligent Tiering Deep Archive Access tier to Frequent Access Tier ?
No
The S3 Intelligent-Tiering storage class is suitable for objects ______ KB that you plan to store for at least _____
larger than 128 KB; 30 days
If the size of an object is less than 128 KB, it is not eligible for _________
auto-tiering.
Smaller objects (less than 128 KB) can be stored, but they are always charged at the _______ rates in the S3 Intelligent-Tiering storage class.
Frequent Access tier
If you delete an object before the end of the 30-day minimum storage duration period, you are charged for ________
30 days.
The __________ storage classes are designed for long-lived and infrequently accessed data.
S3 Standard-IA and S3 One Zone-IA
S3 Standard-IA and S3 One Zone-IA objects are available for _________ (similar to the_____________).
millisecond access; S3 Standard storage class
Amazon S3 charges a retrieval fee for __________ objects, so they are most suitable for ____ accessed data.
S3 Standard-IA and S3 One Zone-IA; infrequently
Best use case for Standard-IA and One Zone IA
1. Storing backups
2. Data that is accessed infrequently but still needs millisecond access when requested
Difference between Standard-IA and One Zone-IA
Standard-IA stores object data redundantly across multiple AZs, whereas One Zone-IA stores data in only one AZ. That is why One Zone-IA is less expensive, less resilient, and less available than S3 Standard-IA.
When to use S3 Standard-IA
Use for primary or only copy of data that cannot be recreated.
When to use One Zone-IA
Use to store data that can be re-created if an AZ fails, or for object replicas when configuring S3 Cross-Region Replication.
Storage classes for low-cost archiving objects
S3 Glacier or S3 Glacier Deep Archive.
Is the durability and resilience of Glacier and Glacier Deep Archive objects less than Standard Storage?
No. They provide the same durability and resilience as S3 Standard; only the retrieval time differs.
With expedited retrieval option, Glacier archive data can be accessed in __________
1-5 minutes
If your data is stored in S3 Glacier and you either delete it or move it to a different storage class before the 90-day minimum, you are charged for ________
90 days
Minimum storage duration for data stored in Glacier deep archive is
180 days.
Retrieval time for Glacier Deep Archive data is ______
12 Hours.
What is the lowest-cost archive storage class?
S3 Glacier Deep Archive Storage.
You can lower Glacier Deep Archive retrieval costs by using ___________, which takes ________
bulk retrieval; up to 48 hours
Which storage classes are available for real time retrieval
S3 Standard, S3 Standard-IA, One Zone-IA, RRS and Intelligent Tiering.
Data from which storage classes are not available for real-time access?
Glacier and Glacier Deep Archive
What are AWS Outposts
AWS Outposts is a fully managed service that offers the same AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility for a truly consistent hybrid experience.
The ___________ is only available for objects stored in buckets on AWS Outposts.
S3 Outposts storage class
If you try to use this S3 Outposts Storage class with an S3 bucket in an AWS Region, it results in an __________error.
InvalidStorageClass
If you try to use other S3 storage classes with S3 on ________, it results in the same InvalidStorageClass error response.
Outposts
Objects stored in the __________storage class are always encrypted using _______________
S3 Outposts (OUTPOSTS) ; server-side encryption with Amazon S3 managed encryption keys (SSE-S3).
Which storage classes have the most durability?
- S3 Standard
- S3 Standard-IA
- One Zone-IA
- Intelligent Tiering
- Glacier
- Glacier Deep Archive
all with 99.999999999% (11 nines) durability
Which storage classes have the least durability?
- Reduced Redundancy Storage
with 99.99% durability
What Storage classes have the most availability?
- S3 Standard
- Glacier (After the objects are restored)
- Glacier Deep Archive (After the objects are restored)
- Reduced Redundancy Storage.
with 99.99% availability
What storage classes have the least availability
- One-Zone IA
with 99.50%
Which storage classes have 99.90% availability?
- S3 Standard-IA
- Intelligent-Tiering
Availability Zones per storage class
Except for One Zone-IA (one AZ), all other storage classes store data across three or more Availability Zones.
Which storage classes have no minimum storage duration?
- S3 Standard
- RRS
Minimum of 30 Days storage duration?
- S3 Standard-IA
- One Zone-IA
- Intelligent Tiering
Minimum storage duration for S3 Glacier and S3 Glacier Deep Archive
90 days and 180 days, respectively
All of the storage classes except for ________ are designed to be resilient to simultaneous complete data loss in a single Availability Zone and partial loss in another Availability Zone.
S3 One Zone-IA
To manage your objects so that they are stored cost effectively throughout their lifecycle, configure their ____________
Amazon S3 Lifecycle.
What are the Supported lifecycle transitions?
- S3 Standard Storage class to any other storage class.
- Any storage class to S3 Glacier or S3 Glacier Deep Archive.
- S3 Standard-IA storage class to the Intelligent-Tiering or One Zone-IA class.
- Intelligent-Tiering class to the One Zone-IA class
- S3 Glacier Storage class to S3 Glacier Deep Archive Storage class
What are Unsupported lifecycle transitions?
- Any storage class to S3 Standard Storage class
- Any storage class to Reduced Redundancy storage class.
- S3 Intelligent Tiering class to S3 Standard-IA class
- S3 One Zone-IA to S3 Standard-IA or to S3 Intelligent Tiering classes.
Which transitions have a cost benefit of transitioning of large objects?
- From S3 Standard or S3 Standard-IA to Intelligent Tiering.
- S3 Standard to S3 Standard-IA or One Zone IA
Which transitions don’t transition objects smaller than 128 KB because it is not cost effective?
1. S3 Standard or S3 Standard-IA to Intelligent-Tiering
2. S3 Standard to S3 Standard-IA or One Zone-IA
Amazon S3 doesn’t transition objects within the___________ because _________
first 30 days; newer objects are often accessed more frequently or deleted sooner than is suitable for S3 Standard-IA or S3 One Zone-IA storage.
How are encrypted objects treated during the transition process?
Encrypted objects remain encrypted throughout the transition process.
How are archive objects retrieved and accessed?
- Before an archive object is accessed, you must create a temporary copy of the object.
- The restored object copy is available only for the duration you specify in the restore request.
- After that, S3 deletes the temporary copy and the object remains archived in S3 Glacier.
How do you restore an object from Glacier Archive?
- Using lifecycle rules, you can only transition an archived object from S3 Glacier to S3 Glacier Deep Archive.
- To move an archived object to a different storage class, first retrieve a copy of it using a restore operation.
- Then use a copy operation to overwrite the object, specifying S3 Standard, Standard-IA, One Zone-IA, Intelligent-Tiering, or RRS as the storage class.
How to transition from Glacier Deep Archive?
You cannot use S3 Lifecycle configuration rules to transition objects from Glacier Deep Archive to any other storage class. The transition must happen by creating a temporary restored copy and then copying the object.
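A sketch of the restore request that creates that temporary copy. Bucket and key names are placeholders; the boto3 call is commented out because it needs credentials:

```python
# The restored copy stays readable for the requested number of days, after
# which S3 deletes it and the object remains archived.
restore_request = {
    "Days": 7,  # lifetime of the temporary copy
    # Retrieval tier: "Standard", "Bulk", or "Expedited"
    # (Expedited is not available for Glacier Deep Archive).
    "GlacierJobParameters": {"Tier": "Standard"},
}
# With boto3 (requires credentials):
# import boto3
# boto3.client("s3").restore_object(
#     Bucket="my-example-bucket", Key="archive/report.zip",
#     RestoreRequest=restore_request)
print(restore_request)
```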
Storage overhead charges for archiving objects
When objects are transitioned to Glacier or Glacier Deep Archive, a fixed amount of storage is added per object to hold the metadata needed to manage it:
1. For each object transitioned to Glacier or Glacier Deep Archive, S3 uses 8 KB of storage for the name of the object and other metadata. S3 uses this metadata to provide a real-time list of the objects archived in Glacier and Glacier Deep Archive.
2. For each object archived in Glacier or Glacier Deep Archive, S3 adds 32 KB of storage for the index and related metadata, which helps identify and restore the object.
You are charged S3 Glacier or S3 Glacier Deep Archive rates for this additional storage.
If you are archiving a large number of small objects, consider aggregating the small objects into larger objects.
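The per-object figures above make the overhead easy to estimate. A back-of-the-envelope sketch (the rates themselves are not included, only the extra kilobytes):

```python
def archive_overhead_kb(object_count: int) -> tuple:
    """Return (list_metadata_kb, index_metadata_kb) of extra storage for
    `object_count` archived objects: 8 KB + 32 KB per object, as above."""
    return 8 * object_count, 32 * object_count

list_kb, index_kb = archive_overhead_kb(1_000_000)
# For a million small objects the metadata overhead alone is tens of GB,
# which is why aggregating small objects before archiving pays off.
print(f"1M objects -> {list_kb / 1024**2:.1f} GB list metadata, "
      f"{index_kb / 1024**2:.1f} GB index metadata")
```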
Number of days you plan to keep the objects archived
The minimum storage duration is 90 days for Glacier and 180 days for Glacier Deep Archive. Deleting objects after the minimum period incurs no extra charge; deleting them before it incurs a prorated early deletion fee.
What are the transition request charges for archiving objects?
Each object transitioned to Glacier or Glacier Deep Archive constitutes one transition request, and there is a cost for each such request.
If you plan to transition a large number of objects, consider the transition costs and try aggregating small objects into larger objects.
Life cycle management - transition to Glacier and Glacier Deep Archive cost considerations.
- Storage Overhead Charges
- Number of days you plan on keeping the objects archived.
- S3 Glacier and S3 Glacier Deep Archive transition request charges.
- S3 Glacier and S3 Glacier Deep Archive data restore charges.
How are S3 objects archived using Lifecycle management.
When you archive objects to S3 Glacier by using lifecycle management, S3 transitions each object asynchronously. There might be a delay between when the transition is initiated and when the object is physically transitioned.
What is propagation delay?
When you add a lifecycle configuration to a bucket, there is usually some lag before the new or updated configuration is fully propagated to all S3 systems. Expect a delay of a few minutes.
Lifecycle configuration for __________ buckets is not supported.
MFA-enabled.
To which buckets can lifecycle configurations be applied: versioned or unversioned?
Both versioned and unversioned.
How are Amazon S3 Lifecycle actions captured by CloudTrail?
Amazon S3 Lifecycle actions are not captured by CloudTrail.
How are Amazon S3 Lifecycle actions captured in logs?
Server Access Logs.
Why doesn’t CloudTrail capture Lifecycle actions?
CloudTrail captures API requests made to external endpoints, whereas Lifecycle actions are performed using internal S3 endpoints.
What is Amazon S3 Inventory?
Amazon S3 Inventory is one of the tools provided by S3 to manage your storage. You can use it to audit and report on the replication and encryption status of your objects.
AWS S3 inventory provides ______ format files
CSV, ORC, Parquet
AWS S3 Inventory provides files on a ________ basis for an ___________
daily or weekly; S3 bucket
Source and destination buckets in AWS S3 Inventory
Source buckets - The bucket for which the inventory is generated.
Destination bucket - The bucket to which the inventory for source bucket is written to
You can use __________ to query Amazon S3 Inventory.
Athena
What is replication in AWS S3?
Replication enables automatic, asynchronous copying of S3 objects across S3 buckets.
What is replicated by default?
By default, replication copies only new S3 objects, created after replication is enabled.
To replicate the existing objects and replicate them to a different bucket, you must ___________
Contact AWS support center
To enable S3 replication, the following configuration must be provided:
- Destination bucket/s where you want to replicate the S3 objects.
- IAM Role that S3 can assume to replicate the objects on your behalf.
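A sketch of that configuration as a request payload. The role and bucket ARNs are placeholders, both buckets must have versioning enabled, and the boto3 call is commented out because it needs credentials:

```python
# Replicate every new object version to a (hypothetical) replica bucket,
# optionally landing replicas in a cheaper storage class.
replication_configuration = {
    "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
    "Rules": [{
        "ID": "replicate-everything",
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {},  # empty filter = all objects
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {
            "Bucket": "arn:aws:s3:::my-replica-bucket",
            "StorageClass": "STANDARD_IA",  # replicas may use another class
        },
    }],
}
# With boto3 (requires credentials):
# import boto3
# boto3.client("s3").put_bucket_replication(
#     Bucket="my-source-bucket",
#     ReplicationConfiguration=replication_configuration)
print(replication_configuration["Rules"][0]["Destination"]["Bucket"])
```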
How to create a duplicate version while maintaining the original metadata?
Replicate the S3 object.
Can you replicate an object to a different storage class?
Yes
What is owner override?
While replicating an object, you can change its ownership to a different AWS account.
For compliance requirements, how can you create a copy of an object in a different Region without modifying the original metadata?
Replication
What is S3 RTC
AWS S3 RTC stands for Replication Time Control, which replicates your data within your AWS Region or across Regions in a predictable time frame.
What is the SLA of S3 RTC for replication.
S3 RTC replicates 99.99% of the new objects stored in S3 within 15 minutes.
When to use Cross region replication?
- Meet compliance requirements.
- Minimize latency
- Increase operational efficiency.
When to use same region replication? SRR
- Aggregate logs into a single bucket.
- Configure live replication between the production and test accounts.
- Abide by data sovereignty laws.
What are the requirements for replication, whether cross-region or same-region?
- The source bucket owner must have the source and destination AWS Regions enabled for their account. The destination bucket owner must have the destination Region enabled for their account.
- Both source and destination buckets must have versioning enabled.
- Amazon S3 must have permissions to replicate objects from the source bucket to the destination bucket.
- If the owner of the source bucket doesn’t own an object, the object owner must grant the bucket owner READ and READ_ACP permissions via the object access control list (ACL).
- If the source bucket has S3 Object Lock enabled, the destination bucket must also have S3 Object Lock enabled.
Additional replication configuration in cross-account scenario where source and destination buckets are owned by different AWS accounts
- The owner of the destination bucket must grant the owner of the source bucket permission to replicate objects, using a bucket policy.
- The destination bucket cannot be configured as a Requester Pays bucket.
What is or can be replicated in S3
- Objects created after you add replication configuration.
- Unencrypted objects.
- Objects encrypted at rest with Amazon S3 managed keys (SSE-S3) or customer managed keys stored in AWS KMS (SSE-KMS).
- Object metadata from the source objects to the replicas.
- Only objects in the source bucket for which the bucket owner has permission to read the objects and their access control lists.
- Object ACL updates, unless you direct Amazon S3 to change the replica ownership when source and destination buckets aren’t owned by the same accounts.
- Object tags, if any
- S3 Object Lock retention information
In general, _______ pay for all Amazon S3 storage and data transfer costs that are associated with their bucket
bucket owners
With Requester Pays buckets, _________ still pays for storing the data
the bucket owner
If you enable Requester Pays on a bucket, __________ to that bucket is not allowed.
anonymous access
Requester Pays buckets do not support the following:
- Anonymous requests
- SOAP requests
- Using a Requester Pays bucket as the target bucket for end-user logging, or vice versa. However, you can turn on end-user logging on a Requester Pays bucket where the target bucket is not a Requester Pays bucket.
The requester pays for the data transfer and the request, and the bucket owner pays for the data storage. However, the bucket owner is charged for the request under the following conditions:
- The requester doesn’t include the parameter x-amz-request-payer in the header (GET, HEAD, or POST) or as a parameter (REST) in the request (HTTP code 403).
- Request authentication fails (HTTP code 403).
- The request is anonymous (HTTP code 403).
- The request is a SOAP request.
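As the conditions above imply, a requester must explicitly acknowledge the charge via the x-amz-request-payer header, which boto3 exposes as the RequestPayer parameter. A sketch with placeholder bucket/key names (the call itself is commented out because it needs credentials):

```python
# Request arguments for reading from a Requester Pays bucket; RequestPayer
# is sent as the x-amz-request-payer header.
request_kwargs = {
    "Bucket": "my-requester-pays-bucket",  # placeholder name
    "Key": "dataset/part-0001.csv",        # placeholder key
    "RequestPayer": "requester",
}
# With boto3 (requires credentials):
# import boto3
# response = boto3.client("s3").get_object(**request_kwargs)
print(request_kwargs["RequestPayer"])
```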
What is Transfer Acceleration?
Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket.
Transfer Acceleration takes advantage of the _____________. As the data arrives at an edge location, it is ______________
globally distributed edge locations in Amazon CloudFront; routed to Amazon S3 over an optimized network path
You might want to use Transfer Acceleration on a bucket for various reasons:
- Your customers upload to a centralized bucket from all over the world.
- You transfer gigabytes to terabytes of data on a regular basis across continents.
- You can’t use all of your available bandwidth over the internet when uploading to Amazon S3
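Enabling acceleration and deriving the accelerate endpoint can be sketched as follows. The bucket name is a placeholder (and, per the rule earlier in this deck, accelerated bucket names must not contain dots); the boto3 call is commented out because it needs credentials:

```python
def accelerate_endpoint(bucket: str) -> str:
    """Build the Transfer Acceleration endpoint URL for a bucket."""
    return f"https://{bucket}.s3-accelerate.amazonaws.com"

accelerate_configuration = {"Status": "Enabled"}  # or "Suspended"

# With boto3 (requires credentials):
# import boto3
# boto3.client("s3").put_bucket_accelerate_configuration(
#     Bucket="my-example-bucket",
#     AccelerateConfiguration=accelerate_configuration)
print(accelerate_endpoint("my-example-bucket"))
```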
Bucket ownership rule
An Amazon S3 bucket is owned by the AWS account that created it. Bucket ownership is not transferable to another account.
Can you change the bucket name or region after a bucket is created?
No
Default number of buckets allowed in aws account
100, with a max limit of 1000 buckets upon request
Number of objects that can be stored in a bucket
No Maximum limit
The ___________ uniquely identifies the object in an Amazon S3 bucket.
object key (or key name)
_________ is a set of name-value pairs
Object metadata
Maximum length of an object key, in bytes
1,024 bytes
The Amazon S3 data model is a ___________
flat structure
There are two kinds of metadata in Amazon S3
system-defined metadata and user-defined metadata.
System-defined object metadata
For each object stored in a bucket, Amazon S3 maintains a set of system metadata. Amazon S3 processes this system metadata as needed. For example, Amazon S3 maintains object creation date and size metadata and uses this information as part of object management.
User-defined object metadata
When uploading an object, you can also assign metadata to the object. You provide this optional information as a name-value (key-value) pair when you send a PUT or POST request to create the object.
The maximum size of a file that you can upload by using the Amazon S3 console is __________
160 GB.
To upload a file larger than 160 GB, use the _____________
AWS CLI, AWS SDK, or Amazon S3 REST API.
Upload an object in a single operation using the AWS SDKs, REST API, or AWS CLI
With a single PUT operation, you can upload a single object up to 5 GB in size.
Upload a single object using the Amazon S3 Console
With the Amazon S3 Console, you can upload a single object up to 160 GB in size.
Upload an object in parts using the AWS SDKs, REST API, or AWS CLI
Using the multipart upload API, you can upload a single large object, up to 5 TB in size.
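Multipart uploads are bounded by a part-count limit of 10,000 and a 5 MiB minimum part size (the last part may be smaller). A sketch of picking a part size that keeps a large object within those limits:

```python
import math

MiB = 1024 ** 2
MAX_PARTS = 10_000   # multipart upload part-count limit
MIN_PART = 5 * MiB   # minimum part size (except the last part)

def choose_part_size(object_size: int) -> int:
    """Pick a part size that stays at or above the 5 MiB minimum while
    keeping the upload within 10,000 parts."""
    return max(MIN_PART, math.ceil(object_size / MAX_PARTS))

size = 5 * 10**12                  # a 5 TB object (the multipart maximum)
part = choose_part_size(size)
print(part // MiB, "MiB parts,", math.ceil(size / part), "parts")
```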
Topics to read later
Multipart upload; what cannot be replicated; monitoring; security; best practices.