Amazon S3 Security Flashcards
How many methods of encryption are there for S3?
4
Server-Side encryption with Amazon S3-Managed Keys (SSE-S3) - Default
encrypts objects using keys handled, managed, and owned by AWS
Server-Side Encryption with KMS keys stored in AWS KMS (SSE-KMS)
Leverage AWS Key Management Service (AWS KMS) to manage encryption keys
Server-Side Encryption with Customer-Provided Keys (SSE-C)
When you want to manage your own encryption keys
Client Side Encryption
Encryption and decryption happen on the client side, outside AWS
How does S3 Encryption - SSE S3 work
Encryption using keys handled, managed, owned by AWS
Server side encryption
Type is AES 256
header must include “x-amz-server-side-encryption”:”AES256”
enabled by default
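A minimal sketch of the SSE-S3 header described above, as a Python dict (the header name and value come straight from the card; requesting it explicitly has the same effect as the default):

```python
# Header to explicitly request SSE-S3 on an upload.
# SSE-S3 is also the default, so omitting it yields the same result.
sse_s3_headers = {"x-amz-server-side-encryption": "AES256"}
```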
Amazon S3 Encryption - SSE-S3 architecture
User uploads the object over HTTP(S) with the header to Amazon S3.
S3 pairs the object with an S3-owned key, encrypts it, and stores it in the encrypted S3 bucket
Advantages of SSE-KMS?
keys are handled by AWS KMS
there is user control & audit key usage using CloudTrail (logs everything that happens)
Object is encrypted server side
header = “x-amz-server-side-encryption”:”aws:kms”
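A sketch of the SSE-KMS headers as a Python dict. The key-ID header is optional (S3 falls back to the AWS managed key for S3 if it is omitted), and the key ARN below is a hypothetical placeholder:

```python
# Headers to request SSE-KMS on an upload.
# The second header (which KMS key to use) is optional; the ARN is made up.
sse_kms_headers = {
    "x-amz-server-side-encryption": "aws:kms",
    "x-amz-server-side-encryption-aws-kms-key-id":
        "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
}
```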
SSE-KMS encryption architecture
User uploads HTTP(S) & KMS header into Amazon S3.
Object is created and is combined with KMS key from AWS KMS. Then it gets encrypted and moved to the S3 Bucket
SSE-KMS Limitations
When you upload, S3 calls the GenerateDataKey KMS API.
When you download, S3 calls the Decrypt KMS API.
Each call counts towards your KMS request quota and is charged.
High request volumes can therefore result in KMS throttling
SSE-KMS API calls architecture
User uploads/downloads objects with SSE-KMS.
The S3 bucket calls the KMS API (GenerateDataKey / Decrypt) using the KMS key
SSE-C encryption
Server-Side encryption using keys managed by customer outside AWS.
Amazon S3 does NOT store keys you provide
MUST use HTTPS
The encryption key must be provided in HTTP headers for every HTTP request made
SSE-C architecture
User uploads the file and the key (HTTPS only, key in headers).
Amazon S3 uses the client-provided key to encrypt the object, then stores it in the S3 bucket.
To read it you must provide the same key.
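The SSE-C headers above can be sketched in Python. The customer key here is randomly generated for illustration; in practice you manage it yourself outside AWS, and these three headers must accompany every request (upload and download) over HTTPS:

```python
import base64
import hashlib
import os

# Hypothetical 256-bit key the customer manages outside AWS.
key = os.urandom(32)

# SSE-C headers sent with every request; S3 uses the key, then discards it.
sse_c_headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
    "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
        hashlib.md5(key).digest()
    ).decode(),
}
```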
S3 Client-Side Encryption
use client libraries such as Amazon S3 Client-Side Encryption Library
clients must encrypt data before sending to S3
also must decrypt data when retrieving from S3
Customer fully manages keys and encryption cycle
Client-Side Encryption Architecture
The file is encrypted client-side with the client’s key. Then the encrypted file is uploaded via HTTP(S) to the Amazon S3 bucket
Encryption in Transit (SSL/TLS)
HTTPS is recommended when using Amazon S3
And HTTPS mandatory for SSE-C
How do you force encryption in transit? aws:SecureTransport
Add a bucket policy onto the S3 bucket with a Deny statement conditioned on “aws:SecureTransport”: “false”
This blocks all HTTP traffic and allows only HTTPS
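A sketch of that force-HTTPS bucket policy, built as a Python dict (the bucket name is a hypothetical placeholder):

```python
import json

# Hypothetical bucket name.
bucket = "my-example-bucket"

# Deny all S3 actions when the request did not arrive over TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{bucket}",
            f"arn:aws:s3:::{bucket}/*",
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

policy_json = json.dumps(policy, indent=2)
```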
Default Encryption vs Bucket Policies
SSE-S3 is automatically applied to new objects
You can also Force encryption using bucket policies and refuse any API call to PUT an S3 object without encryption headers (SSE-KMS or SSE-C)
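A sketch of the second approach: a Deny statement that refuses any PutObject call lacking the SSE-KMS encryption header (bucket name hypothetical; swap the condition value for SSE-C headers as needed):

```python
# Hypothetical bucket; deny any PutObject that does not declare SSE-KMS.
statement = {
    "Sid": "DenyUnencryptedPuts",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::my-example-bucket/*",
    "Condition": {
        "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
    },
}
```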
Are Bucket Policies evaluated before “Default Encryption”?
Yes
What does CORS mean?
Cross-Origin Resource Sharing (CORS)
What is Origin in CORS?
scheme (protocol) + host (domain) + port
e.g. https://www.example.com (implied port is 443 for HTTPS, 80 for HTTP)
What is CORS? technical
Web Browser based mechanism to allow requests to other origins while visiting the main origin
Give an example of same origin CORS
same origin = http://example.com/app1
&
http://example.com/app2
Give an example of different origin CORS
http://www.example.com
&
http://other.example.com
What happens if you do not use CORS Headers?
The requests won’t be fulfilled unless the other origin allows for the requests.
e.g. Access-Control-Allow-Origin
CORS Architecture
Web server (origin) https://www.example.com
Web Browser
Web Server (Cross-Origin) https://www.other.com
The web browser requests content over HTTPS from the origin web server. The origin server returns the page, which references additional content hosted on the cross-origin server.
The web browser sends a preflight request to the cross-origin server, stating the target host and the origin it is coming from.
The cross-origin server replies with a preflight response saying the origin is allowed to use methods GET, PUT, DELETE.
The web browser then makes the actual requests to the cross-origin server.
How does CORS apply to S3?
If client makes cross-origin request on our S3 bucket, we need to enable correct CORS headers.
What can you do with origins in CORS?
Can allow for specific origin or * all origins
CORS on S3 architecture
Web browser sends GET/index.html request to host bucket.
Host bucket sends index.html to web browser.
In that index.html there is an image called coffee.jpg which is hosted on the other S3 bucket.
To fetch it, the web browser makes a GET /images/coffee.jpg request to the other bucket, with the Host header set to that bucket and the Origin header set to the first bucket’s website URL.
If CORS headers are configured correctly, the second S3 bucket responds with an Access-Control-Allow-Origin header, and the web browser retrieves and displays the coffee.jpg image.
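A sketch of the CORS configuration the second bucket would need, in the JSON shape the S3 console accepts (the allowed origin is a hypothetical website URL; in practice it would be the first bucket’s website endpoint):

```python
# S3 CORS configuration: allow GET requests from one specific origin.
# The origin URL is a hypothetical placeholder.
cors_rules = [
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET"],
        "AllowedOrigins": ["https://www.example.com"],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000,
    }
]
```

Using `"AllowedOrigins": ["*"]` instead would allow all origins, as the card above notes.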
What is MFA Delete?
When you need to delete an object version permanently or suspend versioning on the bucket you will be asked for an MFA code
What do you need to have to use MFA delete?
Bucket Versioning must be enabled
Who can enable/disable MFA delete?
Only the root account (bucket owner)
S3 Access Logs
Any request from any account, authorized or denied, gets logged into another S3 bucket
What can you do with S3 logs?
analyse them with tools such as Amazon Athena
Where must the target logging bucket be?
In the same AWS region as the source bucket
What should you never do with S3 logs?
Do not set your logging bucket to be the monitored bucket.
if you do you will create a logging loop
Pre Signed URLs
Generate pre-signed urls using S3 Console, AWS CLI or SDK
URL expiration
S3 Console: 1 min to 720 mins (12 hours)
AWS CLI: default 3600 sec, max 604800 sec (168 hours)
What is the purpose of a pre-signed url?
Users given a pre-signed URL inherit the permissions of the user that generated the URL for GET/PUT
Pre-Signed URL use case (architecture)
You have a private S3 bucket and want to share a file within that bucket.
You generate pre-signed URL, send that URL to the user, and they can use that URL to download only that specific file
S3 Glacier Vault Lock
Adopt a WORM (Write Once Read Many) model
How do you create S3 Glacier Vault Lock?
Create a Vault Lock Policy
What happens if you lock the policy for future edits?
Can no longer be changed or deleted
When is a Glacier Vault Lock helpful?
For compliance and data retention
S3 Object Lock (different than S3 Glacier Vault Lock)
WORM (Write Once Read Many) model
Blocks an object version deletion for a specified amount of time
What is Retention Mode Compliance in S3 Object Lock?
object versions cannot be overwritten or deleted by any user, including root user
object retention modes cant be changed and retention periods cant be shortened
What is Retention Mode Governance in S3 Object Lock?
most users cant overwrite or delete an obj version or alter its lock settings
some users have special permissions to change the retention or delete the object
What is a retention period
protect the object for a fixed period, it can be extended
What is Legal Hold
protect the object indefinitely, independent of retention period
can be freely placed and removed using the s3:PutObjectLegalHold IAM permission
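A sketch of a default Object Lock retention rule for a bucket, in the shape passed to the PutObjectLockConfiguration API (the mode and day count are hypothetical example values):

```python
# Default retention for an Object Lock-enabled bucket.
# GOVERNANCE mode and 30 days are illustrative values;
# COMPLIANCE mode would block even the root user from deleting versions.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}},
}
```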
S3 Access Points
Create access points for each S3 location and assign a Policy for Read/Write to it. Then connect the group of users you want to use that access point.
For example Finance users will access that access point and only be able to read/write in the S3 bucket /finance folder.
Why do we need access points?
To simplify security management for S3 Buckets
What do each access point have?
Its own DNS name (Internet origin or VPC origin)
Access point policy
similar to bucket policy - manage security at scale
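A sketch of an access point policy matching the Finance example above. The account ID, role name, and access point name are hypothetical placeholders; note that access point policies address objects through the access point’s ARN, not the bucket’s:

```python
# Hypothetical access point policy: a Finance role may read/write only
# under the /finance prefix through the "finance-ap" access point.
access_point_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/FinanceRole"},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource":
            "arn:aws:s3:us-east-1:123456789012:accesspoint/finance-ap"
            "/object/finance/*",
    }],
}
```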
Access Points within VPC Origin
EC2 instances talk to a VPC endpoint (which has an endpoint policy); outside the VPC, an access point (VPC origin) with an access point policy attached sits in front of the S3 bucket, which also has a bucket policy
Basically, you define the access point to be accessible only from within the VPC
What do you need to create to access the Access Point from within a VPC?
A VPC Endpoint (Gateway or Interface)
What must the VPC endpoint policy do to grant access to the target bucket and Access Point?
The policy must be set to allow access to both
S3 Object Lambda
Write your own code and invoke a Lambda function to modify an object after it is retrieved from the bucket but before the caller application receives it