S3 Flashcards
S3 bucket name
Must be globally unique
Buckets are defined at the region level
key is the FULL path
- s3://my-bucket/my_file.txt
- s3://my-bucket/my_folder1/another_folder/my_file.txt
key is composed of prefix + object name
s3://my-bucket/my_folder1/another_folder/my_file.txt
(prefix: my_folder1/another_folder/, object name: my_file.txt)
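The bucket / prefix / object-name split above can be sketched with a small parser. This is a hypothetical helper for illustration, not part of boto3 or the AWS CLI:

```python
# Split an s3:// URI into bucket, key, prefix, and object name.
# Hypothetical helper for illustration only.

def parse_s3_uri(uri: str) -> dict:
    assert uri.startswith("s3://"), "expected an s3:// URI"
    bucket, _, key = uri[len("s3://"):].partition("/")
    prefix, _, name = key.rpartition("/")
    return {
        "bucket": bucket,
        "key": key,  # the key is the FULL path after the bucket
        "prefix": prefix + "/" if prefix else "",
        "object_name": name,
    }

parts = parse_s3_uri("s3://my-bucket/my_folder1/another_folder/my_file.txt")
# parts["prefix"]      -> "my_folder1/another_folder/"
# parts["object_name"] -> "my_file.txt"
```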
Amazon S3 - Versioning
bucket level
It is a best practice to enable versioning on your buckets
Protects against unintended deletes
Easy rollback to a previous version
4 methods of encrypting objects in S3
SSE-S3: encrypts S3 objects using keys handled & managed by AWS
SSE-KMS: leverage AWS Key Management Service to manage encryption keys
SSE-C: when you want to manage your own encryption keys
Client-Side Encryption
SSE-S3
keys handled & managed by Amazon S3
encrypted server-side
"x-amz-server-side-encryption": "AES256"
SSE-KMS
encryption using keys handled & managed by KMS
KMS Advantages: user control + audit trail
encrypted server-side
"x-amz-server-side-encryption": "aws:kms"
SSE-C
server-side encryption
keys fully managed by the customer outside of AWS
S3 does not store the encryption key
HTTPS must be used
Client Side Encryption
Encryption and decryption at client side
Encryption in flight
Use the HTTPS endpoint for encryption in flight
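The four options differ mainly in the request headers you send with a PUT/GET. A sketch of the headers each method uses (the header names are the real S3 headers; key values and ARNs below are placeholders):

```python
# Request headers used by each S3 server-side encryption option.
# Header names are S3's actual headers; values in <angle brackets> are placeholders.

sse_s3_headers = {"x-amz-server-side-encryption": "AES256"}

sse_kms_headers = {
    "x-amz-server-side-encryption": "aws:kms",
    # optional: pick a specific KMS key instead of the default one
    "x-amz-server-side-encryption-aws-kms-key-id": "<kms-key-arn>",
}

# SSE-C: you supply the key on every request; S3 never stores it.
# HTTPS is mandatory because the key travels in the headers.
sse_c_headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": "<base64-encoded key>",
    "x-amz-server-side-encryption-customer-key-MD5": "<base64 MD5 of key>",
}

# Client-side encryption: no special S3 headers at all -- you encrypt the
# object yourself before the PUT and decrypt it yourself after the GET.
```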
S3 Block Public Access
Blocks public and cross-account access to buckets and objects
granted through any public bucket or access point policies
Why this default policy? These settings were created to prevent company data leaks
Pre-Signed URLs
Very useful for giving temporary access to users.
URLs that are valid only for a limited time (ex: premium video service for logged-in users)
CORS is fulfilled using HTTP headers
(ex: Access-Control-Allow-Origin)
S3 CORS
If a client does a cross-origin request on our S3 bucket, we need to enable the correct CORS headers.
You can allow for a specific origin or for * (all origins)
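The bucket-level CORS configuration can be expressed as the dict shape that boto3's put_bucket_cors expects. The origin below is an example value:

```python
# Example S3 CORS configuration (the shape boto3's put_bucket_cors expects).
# The allowed origin is an example value; use ["*"] to allow all origins.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://www.example.com"],
            "AllowedMethods": ["GET", "HEAD"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,  # how long browsers may cache the preflight
        }
    ]
}
# Applied with:
# s3.put_bucket_cors(Bucket="my-bucket", CORSConfiguration=cors_configuration)
```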
Cross-Region Replication (CRR)
Use cases: compliance, lower latency access,
replication across accounts
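A replication rule is configured per bucket; a sketch of the dict shape boto3's put_bucket_replication takes (the role and destination-bucket ARNs are placeholders, and versioning must be enabled on both source and destination buckets):

```python
# Example replication configuration (shape used by put_bucket_replication).
# The IAM role ARN and destination bucket ARN are placeholders.
replication_configuration = {
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [
        {
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = replicate all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
        }
    ],
}
```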
Upload Files to S3 Glacier
You must either use the AWS CLI or write code to make requests, using either the REST API directly or the AWS SDKs
What about the read-write consistency of S3?
Effective immediately, all S3 GET, PUT, and LIST operations, as well as operations that change object tags, ACLs, or metadata, are now strongly consistent.
What you write is what you will read, and the results of a LIST will be an accurate reflection of what’s in the bucket. This applies to all existing and new S3 objects, works in all regions, and is available to you at no extra charge!
Comment on the time limit of S3 presigned URLs and their expiry
S3 presigned URLs require you to change your content URLs.
A presigned URL expires after its defined expiration time.
IMPORTANT -> S3 has no signed-cookie feature (unlike CloudFront).
CloudFront signed URLs
restrict access to individual files
CloudFront signed cookies
Provide access to multiple files
S3 consistency model
1) S3 provides strong read-after-write consistency for PUTs and DELETEs.
2) Amazon S3 does not support object locking for concurrent writers.
3) If two put requests are made on the same key, the latest put request wins.
4) Bucket configurations have an eventual consistency model (e.g., deleting a bucket or enabling versioning).
For more reading, see the AWS docs on the S3 consistency model.
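Point 3 above ("latest put wins") behaves like a plain key-value overwrite; a toy sketch of the model (not actual S3 behavior under the hood, just the observable result):

```python
# Toy model of "latest put wins": when two PUTs target the same key,
# the request that completes last determines the object's content.
bucket = {}

def put(key, body):
    bucket[key] = body  # overwrites any previous value for this key

put("report.txt", "version from writer A")
put("report.txt", "version from writer B")  # completes last -> wins
# bucket["report.txt"] is now "version from writer B"
```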
Amazon S3 now provides increased performance
3,500 requests per second to add data (PUT/COPY/POST/DELETE) and 5,500 requests per second to retrieve data (GET/HEAD), per prefix