Section 14: Amazon S3 Security Flashcards

1
Q

About how to encrypt objects in S3 buckets,
which of the following is fake?

You can encrypt objects in S3 buckets using:
* A) Server-Side Encryption (SSE). SSE with S3-Managed Keys (SSE-S3) is enabled by default.
* B) SSE with KMS keys stored in AWS KMS (SSE-KMS). Leverage AWS Key Management Service to manage encryption keys.
* C) SSE with Customer-Provided Keys (SSE-C). When you want to manage your own encryption keys.
* D) SSE with Certificate Authority (CA) certs. When you want to use a CA to manage encryption keys.
* E) Client-side encryption: encrypt everything client-side and then upload to S3.

A

D is fake. There is no true version of that one.

2
Q

About SSE-S3. Which, if any, is false and what is true version?

  • A) Server Side Encryption (SSE). SSE with S3 Managed Keys (SSE-S3) is enabled by default for new buckets and new objects.
  • B) encryption type AES512
  • C) must set header "x-amz-server-side-encryption":"AES512"
  • D) user -> upload file -> object under S3 -> s3 pairs it with S3 owned key. perform encryption with s3 object + s3 owned key -> store object in s3 bucket

possibly important exam

A

B and C are false. True versions are:
* B) encryption type AES256
* C) must set header "x-amz-server-side-encryption":"AES256"
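The header mechanics above can be sketched with a small helper that builds the SSE request headers an SDK attaches to a PUT. This is a simulation, not an AWS call; the header names are the real ones, but the function (`sse_headers`) is hypothetical:

```python
# Hypothetical helper: builds the server-side-encryption request headers
# an SDK would attach to an S3 PUT, depending on the SSE mode.

def sse_headers(mode, kms_key_id=None):
    if mode == "SSE-S3":
        # AES256 is the only valid value for SSE-S3 (not AES512)
        return {"x-amz-server-side-encryption": "AES256"}
    if mode == "SSE-KMS":
        headers = {"x-amz-server-side-encryption": "aws:kms"}
        if kms_key_id:
            # optionally pin a specific KMS key; otherwise the aws/s3 key is used
            headers["x-amz-server-side-encryption-aws-kms-key-id"] = kms_key_id
        return headers
    raise ValueError(f"unsupported mode: {mode}")

print(sse_headers("SSE-S3"))
print(sse_headers("SSE-KMS", "1234abcd-12ab-34cd-56ef-1234567890ab"))
```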

3
Q

About SSE-KMS. Which, if any, is false (or missing critical information) and what is true version?

  • A) SSE with KMS keys stored in AWS KMS (SSE-KMS). Leverage AWS Key Management Service to manage encryption keys.
  • B) user control + audit key usage using CloudTrail
  • C) Must set header "x-amz-server-side-encryption":"aws:kms"
  • D) user -> upload -> in S3 (object + KMS key) -> encryption -> bucket.
  • E) to see the object, you must have access to the object in the S3 bucket.

possibly important exam

A

E is missing info. Correct version:

  • E) to see the object, you must have access to the object in the S3 bucket AND access to the underlying KMS key.

4
Q

About SSE-KMS Limitations. Which, if any, is false and what is the true version?

  • A) may be impacted by KMS limits
  • B) when you upload, it calls the GenerateDataKey KMS API
  • C) when you download, it calls the Decrypt KMS API
  • D) Count towards the KMS quota per second (5500, 10000, 30000 req/s based on region)
  • E) you can request a quota increase using the Service Quotas Console
  • F) if you have a very high-throughput S3 bucket and everything is encrypted using KMS keys, you may run into KMS throttling (the shared KMS request quota becomes the bottleneck).

possibly important exam

A

All true. About F, I think he said that this is something the exam may test you on.
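A toy sketch of why the quota matters: every SSE-KMS upload costs a GenerateDataKey call and every download a Decrypt call, so a busy bucket can exhaust the per-second KMS quota. The function and the way excess requests are counted are illustrative only:

```python
# Toy simulation of the KMS request quota: each SSE-KMS upload/download
# consumes one KMS API call; past the per-second quota, calls are throttled.
# Quota numbers follow the card (5500/10000/30000 req/s depending on region).

def simulate_kms_calls(requests_in_one_second, quota_per_second=5500):
    served = min(requests_in_one_second, quota_per_second)
    throttled = requests_in_one_second - served
    return served, throttled

print(simulate_kms_calls(6000))  # (5500, 500) -> 500 requests throttled
print(simulate_kms_calls(100))   # (100, 0)   -> under quota, all served
```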

5
Q

About SSE-C. Which, if any, is false (or missing critical information) and what is true version?

  • SSE with Customer-Provided Keys (SSE-C). When you want to manage your own encryption keys.

possibly important exam

A

True, as far as it goes, but it's missing critical info: with SSE-C, HTTPS is mandatory, the encryption key must be provided in HTTP headers with every request, and Amazon S3 never stores the key you provide (it is discarded after use).
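Since S3 never stores the SSE-C key, the client must send it (base64-encoded, with an MD5 checksum) on every request. The header names below are the real SSE-C headers; the helper itself is hypothetical and only builds the headers, it makes no AWS call:

```python
import base64
import hashlib

# Sketch of the headers a client must send on EVERY SSE-C request
# (upload and download alike), since S3 does not keep the customer key.

def sse_c_headers(key_bytes):
    assert len(key_bytes) == 32  # SSE-C uses a 256-bit AES key
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key_bytes).decode(),
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(key_bytes).digest()
        ).decode(),
    }

headers = sse_c_headers(b"0" * 32)
print(headers["x-amz-server-side-encryption-customer-algorithm"])  # AES256
```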
6
Q

About client-side encryption. Which, if any, is false (or missing critical information) and what is true version?

  • A) Client side encryption: encrypt everything client side and then upload to S3
  • B) use client libraries such as Amazon S3 Client Side encryption library
  • C) clients must encrypt data before sending to S3
  • D) AWS will send decrypted info when sending info to client
  • E) customer fully manages keys and encryption code

possibly important exam

A

D is false. Clients must decrypt the data themselves when retrieving it from S3.
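The flow can be sketched end to end: encrypt locally, upload only ciphertext, download ciphertext, decrypt locally. The XOR "cipher" below is a toy stand-in ONLY — real code would use an AES library such as the Amazon S3 Client-Side Encryption library; the bucket dict and function names are hypothetical:

```python
# Flow sketch of client-side encryption. The XOR cipher is NOT real
# encryption — it is a placeholder so the round-trip is visible.

def toy_cipher(data, key):
    # XOR is symmetric: applying it twice with the same key round-trips
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

fake_s3 = {}  # stands in for the S3 bucket

def client_upload(bucket, key_name, plaintext, secret):
    # S3 only ever sees ciphertext (flashcard bullet C)
    bucket[key_name] = toy_cipher(plaintext, secret)

def client_download(bucket, key_name, secret):
    # the client decrypts the data itself (the corrected bullet D)
    return toy_cipher(bucket[key_name], secret)

secret = b"my-local-key"
client_upload(fake_s3, "report.txt", b"hello s3", secret)
assert fake_s3["report.txt"] != b"hello s3"             # stored encrypted
print(client_download(fake_s3, "report.txt", secret))   # b'hello s3'
```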

7
Q

T/F Encryption in transit (SSL/TLS)
* encryption in flight is also called SSL/TLS
* S3 exposes both HTTP and HTTPS endpoints, but HTTPS is recommended and is mandatory for SSE-C (most people use HTTPS by default now)
* to force encryption in transit, use a bucket policy with the condition key

aws:SecureTransport.

A

All T

8
Q

Is this a correct example of how to force encryption in transit (using HTTPS) for all objects in your S3 bucket?

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}

A

True. It would have been false if

"aws:SecureTransport": "false"

had been set to "true" (that would deny HTTPS requests instead of the plain-HTTP ones we want to block).
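The logic of that statement can be simulated in a few lines: an explicit Deny on s3:GetObject fires only when the aws:SecureTransport condition matches "false", i.e. for plain-HTTP requests. This is a toy evaluator, not a real IAM engine:

```python
# Toy evaluation of the Deny statement above: requests over plain HTTP
# (SecureTransport false) are denied; HTTPS requests pass through.

def is_request_allowed(action, is_https):
    # the policy's Bool condition compares against the string "false"
    condition_matches = (str(is_https).lower() == "false")
    if action == "s3:GetObject" and condition_matches:
        return False  # explicit Deny always wins
    return True  # assume some other statement allows the request

print(is_request_allowed("s3:GetObject", is_https=False))  # False: HTTP blocked
print(is_request_allowed("s3:GetObject", is_https=True))   # True: HTTPS ok
```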

9
Q


T/F

creating your own kms key does cost you some money every month

A

T

10
Q

Default Encryption
* A) SSE-S3 applied automatically to new objects stored in S3 (unless you say otherwise)
* B) you can force encryption using a bucket policy and refuse any API call to put an S3 object without encryption headers (SSE-KMS or SSE-C).
* C) Default encryption settings are evaluated before bucket policies

A

C is false. Bucket policies are evaluated before default encryption settings. (Though I don't know if it's implying a priority. Possibly?)

Subsection: S3 Default Encryption

11
Q

T/F CORS
* A) cross origin resource sharing
* B) origin = scheme (protocol) + host (domain) + port
* C) ex: in https://www.example.com the implied port is 443 for HTTPS. The domain is www.example.com and the protocol is HTTPS. And altogether, that makes the origin.
* D) http://example.com/app1 and https://example.com/app2 have the same origin
* E) http://example.com/app1 and http://other.example.com/app2 have different origins (note the different domains)
* F) If two origins are different, requests won’t be fulfilled unless the other origin allows for the requests using CORS Headers (ex: Access-Control-Allow-Origin)

A

D is false. Correct version is:

http://example.com/app1 and http://example.com/app2 have the same origin

In the question, the second address used https.

Subsection: CORS
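The origin definition above (scheme + host + port, with paths ignored) can be checked mechanically with the standard library; the `origin` helper and the default-port table are mine, not part of any AWS API:

```python
from urllib.parse import urlparse

# Origin = scheme + host + port. Paths never count toward the origin.
DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    p = urlparse(url)
    port = p.port or DEFAULT_PORTS.get(p.scheme)  # fill in the implied port
    return (p.scheme, p.hostname, port)

print(origin("http://example.com/app1") == origin("http://example.com/app2"))       # True: same origin
print(origin("http://example.com/app1") == origin("https://example.com/app2"))      # False: scheme differs
print(origin("http://example.com/app1") == origin("http://other.example.com/app2")) # False: host differs
```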

12
Q
  • Say we’re on Firefox. We make an HTTPS request to https://www.example.com (the web server origin). The index.html retrieved from https://www.example.com is going to say “hey, I need some images from another site, https://www.other.com.” https://www.other.com is our cross-origin web server. We’re calling it cross-origin because it’s a different origin than the one the page was loaded from.
  • So, the web browser we’re on (Firefox) has security built in and is going to do a pre-flight request to the cross origin. The pre-flight is an HTTP OPTIONS request, and it carries the Host header (www.other.com) and the Origin header (https://www.example.com). So “OPTIONS” is the request method, and Host and Origin are the headers it carries.
  • Next, our cross-origin web server (the one with the picture, https://www.other.com), if it is configured to use Cross-Origin Resource Sharing with https://www.example.com, is going to say “Yes, I do allow this origin, https://www.example.com” (which it says by sending back the pre-flight response header Access-Control-Allow-Origin with a value of https://www.example.com), “to do GET, PUT, DELETE” (which it says by sending the pre-flight response header Access-Control-Allow-Methods with a value of “GET, PUT, DELETE”). And, in that situation, those are the CORS headers (Access-Control-Allow-Origin and Access-Control-Allow-Methods).
  • If the web browser is happy with these CORS headers, then the web browser (Firefox) gets to make the actual request to the other server (https://www.other.com) to retrieve the pictures that our index.html is waiting for.
A

True
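The pre-flight exchange described in the question can be sketched as two small functions: one playing the cross-origin server answering an OPTIONS request, one playing the browser deciding whether to proceed. All names and the config values are illustrative:

```python
# Toy pre-flight exchange: the browser sends OPTIONS with an Origin header;
# the cross-origin server answers with CORS headers; the browser proceeds
# only if its origin and method are allowed.

ALLOWED_ORIGINS = {"https://www.example.com"}
ALLOWED_METHODS = {"GET", "PUT", "DELETE"}

def preflight_response(request_origin):
    # what www.other.com would send back for the OPTIONS pre-flight
    if request_origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Access-Control-Allow-Methods": ", ".join(sorted(ALLOWED_METHODS)),
        }
    return {}  # no CORS headers -> the browser blocks the real request

def browser_allows(request_origin, method):
    headers = preflight_response(request_origin)
    return (headers.get("Access-Control-Allow-Origin") == request_origin
            and method in headers.get("Access-Control-Allow-Methods", ""))

print(browser_allows("https://www.example.com", "GET"))   # True
print(browser_allows("https://evil.example.net", "GET"))  # False
```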

13
Q
  • if a client makes a cross origin request on our s3 bucket, we need to enable the correct CORS headers.
  • You can allow for a specific origin or use * (for all origins)

popular exam question

A

True

14
Q

T/F, assuming everything is set up correctly (static website hosting enabled, block-public-access off, good bucket policy that allows everyone to GET the objects in a bucket), the following CORS configuration should allow your content from site 2 to be read and used by your site 1 (assuming site 1 is named whatever is in the AllowedOrigins list)

[
  {
    "AllowedHeaders": [
      "Authorization"
    ],
    "AllowedMethods": [
      "GET"
    ],
    "AllowedOrigins": [
      "https://2023-283-tuesday-s3.s3.us-east-2.amazonaws.com/index-with-fetch-and-cors.html"
    ],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]

A

True

15
Q

T/F (and provide corrected versions, if appropriate). If you turn on MFA for an S3 bucket, then MFA is required to
* A) permanently delete an object
* B) suspend versioning on the bucket
* C) enable versioning
* D) list deleted versions

A

C and D are false. Even if you have MFA enabled for an S3 bucket, you still won’t need to use MFA to do those things.

16
Q

T/F (and provide corrected versions, if appropriate) about MFA Delete:

  • A) Versioning does not have to be enabled on the bucket to use MFA Delete
  • B) Anyone with appropriate IAM policies (access to the bucket) can enable/disable MFA Delete
A

Both are false! Here are the correct versions:

  • A) To use MFA Delete, Versioning must be enabled on the bucket
  • B) Only the bucket owner (root account) can enable/disable MFA Delete
17
Q

Would this work if I was root? What if I was non-root?

aws s3api put-bucket-versioning --bucket somethingsomething1 --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::some-real-value 864127" --profile some-real-cli-profile

A

It would work if you were root and had also set up MFA for your root account; it would not work otherwise. It is, at the time of writing, the only known way of enabling MFA Delete for an S3 bucket.

18
Q

T/F

If you have MFA Delete enabled for a bucket (don't forget that versioning needs to be on prior to setting up MFA Delete), then you can't actually permanently delete something using the UI. You have to use something else (AWS CLI, AWS SDK, or S3 REST API), or remove the MFA Delete ability.

A

True

19
Q

What happened when you tried to enable MFADelete from a non-root account, assuming you set everything up correctly.

A

Jack all. Got a permissions issue. It really does have to be a root account: root MFA, root access key, and a CLI profile made with the root access key (so it has root permissions).

20
Q

if any are false, what is/are the true version(s)?

S3 Access Logs
* A) may want to log all access for audit purposes
* B) any request made to S3 will be logged into another S3 bucket
* C) data can be analyzed
* D) target logging bucket can be in any aws region

A

D) is false.

True version: target logging bucket must be in the same aws region as the bucket you want the logs for.

log format: https://docs.aws.amazon.com/AmazonS3/latest/dev/LogFormat.html
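The "data can be analyzed" bullet can be made concrete with a tiny tally over log records. The lines below are abridged and hypothetical — the real access-log format has many more fields (see the link above) — but the counting idea is the same:

```python
import collections

# Simplified sketch of analyzing S3 access logs: each line of a log object
# is one request record. Fields here are abridged (bucket, operation, key,
# HTTP status); the real format has many more.

log_lines = [
    "my-bucket REST.GET.OBJECT reports/q1.csv 200",
    "my-bucket REST.GET.OBJECT reports/q1.csv 403",
    "my-bucket REST.PUT.OBJECT uploads/a.png 200",
]

def tally_by_status(lines):
    counts = collections.Counter()
    for line in lines:
        *_, status = line.split()  # status is the last field in this sketch
        counts[status] += 1
    return counts

print(tally_by_status(log_lines))  # e.g. two 200s and one 403
```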

21
Q

T/F

  • A) you can set up S3 Access Logs on the same bucket for which you want the logging done.
  • B) turning on logging automatically updates your bucket policy
A

A is False. If you do this you will create a logging loop and your bucket will grow exponentially.

S3 Access Logs

22
Q

T/F (one of these is more like a caveat than a true false, which one do you think it is)

  • A) can use s3 console, cli or sdk to generate a presigned url
  • B) url expiration: console (1 min to 720 mins); cli (max of 168 hours)
  • C) users given a pre-signed URL inherit the permissions of the user that generated the URL for the GET / PUT
  • D) Good way to give temp access to one file/object
A

Well, C is true but I suspect it's missing POST. Steph doesn't mention it on the slides, but later he does mention that a user can use a presigned URL to upload a file, and he doesn't indicate that the upload needs to be merely editing an existing record, so it seems like the upload could be used to create a new record, which would make it a POST, not a PUT (at least according to some definitions of POST; perhaps all of them, idk).

S3 Pre-signed URLS
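The expiration rule in bullet B can be sketched as a pure time check: a pre-signed URL embeds when it was issued and how long it is valid, and past that window S3 rejects it. The function and timestamps are illustrative (the limits from the card: console 1-720 min, CLI max 168 hours):

```python
from datetime import datetime, timedelta

# Sketch of pre-signed URL expiry: valid only inside the issued-at + TTL window.

def is_url_valid(issued_at, expires_in_seconds, now):
    return now < issued_at + timedelta(seconds=expires_in_seconds)

issued = datetime(2023, 10, 10, 12, 0, 0)
print(is_url_valid(issued, 3600, datetime(2023, 10, 10, 12, 30)))  # True: 30 min in
print(is_url_valid(issued, 3600, datetime(2023, 10, 10, 13, 30)))  # False: expired
```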

23
Q

T/F these are good examples of use cases for S3 presigned urls
* A) allow only logged in users to download a premium video from your S3 bucket
* B) allow an ever-changing list of users to download files by generating URLs dynamically
* C) allow temporarily a user to upload a file to a precise location in your S3 bucket (that does seem post-y)

A

T

S3 Pre-Signed URL

24
Q

T/F

  • A) S3 Access Points can be used to give a big variety of users a variety of accesses to an S3 bucket without making a big mess of an S3 bucket policy.
A

True

S3 Access Points

25
Q

T/F and provide any true versions, if appropriate.

Say I've got an S3 bucket /somebucket and I've got some prefixes within that bucket

s3://somebucket/finance
s3://somebucket/sales

I could:

  • A) Create an Access Point policy to grant R/W to /finance prefix
  • B) Create an Access Point policy to grant R/W to /sales prefix
  • C) Would still need a regular bucket policy though, to grant R, W, or RW permissions to the entire bucket
A

C is false. Correct version is:

  • C) Create an Access Point policy to grant R (read) permissions to the entire bucket.

S3 Access points
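The setup above can be modeled as a lookup table: each access point grants read/write only under its prefix. This is a toy model of the authorization outcome, not AWS policy syntax; all names are hypothetical:

```python
# Toy model: each access point scopes R/W to one prefix of the bucket,
# which is how access points keep the bucket policy from becoming a mess.

ACCESS_POINTS = {
    "finance-ap": {"prefix": "finance/", "perms": {"read", "write"}},
    "sales-ap":   {"prefix": "sales/",   "perms": {"read", "write"}},
}

def allowed(access_point, op, key):
    ap = ACCESS_POINTS[access_point]
    return key.startswith(ap["prefix"]) and op in ap["perms"]

print(allowed("finance-ap", "write", "finance/2023/budget.csv"))  # True
print(allowed("finance-ap", "write", "sales/leads.csv"))          # False: wrong prefix
```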

26
Q

T/F
Each access point has
* its own DNS name (Internet Origin or VPC Origin)
* an access point policy (similar to a bucket policy) that allows you to manage security at scale

A

T

27
Q

S3 Access Point - VPC Origin
* we can define AP to be accessible only from within the VPC
* you must create a VPC endpoint to access the AP (Gateway or Interface Endpoint)
* the VPC Endpoint Policy must allow access to the target bucket and the access point

Given all that, is the following policy missing anything?

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Principal": "*",
      "Action": "s3:GetObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::awsexamplebucket1/*"
    }
  ]
}

A

It is missing THREE things! Muahahah!
(or four, if you count commas).

Correct version has
* 1. open bracket around the value associated with keyword “Action”,
* 2. open bracket around the value associated with the keyword “Resource”
* 3. A second item in the "Resource" value list, "arn:aws:s3:us-west-2:1232435345345:accesspoint/example-vpc-ap/object/*"
* 4. A comma between the first and second list items in the “Resource” value.

Completed version looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::awsexamplebucket1/*",
        "arn:aws:s3:us-west-2:1232435345345:accesspoint/example-vpc-ap/object/*"
      ]
    }
  ]
}

S3 Access Points

28
Q

T/F
S3 Access Point - Object Lambda
* Usually we use AWS Lambda Functions to change the object before it is retrieved by the caller app
* Only one S3 bucket is needed, on top of which we create an S3 Access Point and an S3 Object Lambda Access Point

A

T

S3 Object Lambda

29
Q

T/F (and provide true versions, if applicable) S3 Object Lambda Function use cases
* A) redacting personally identifiable information for analytics or non-prod envs
* B) Converting across data formats, such as converting XML to JSON
* C) Resizing and watermarking images on the fly using caller-specific details, such as the user who requested the object

A

T

S3 Object Lambda

30
Q

Sum this up.

  • you have an original object that a user on an e-commerce application uploads to an S3 bucket. Personally identifiable information is associated with the upload (in or around it, whatever).
  • you have an analytics app that wants to access everything in the S3 bucket, but we don’t want to pass the PII to the analytics app. So we make a supporting S3 Access Point (AP1), connect to that a Lambda Function (LF1) that redacts PII information. Then we create an S3 Object Lambda Access Point and connect that to the other side of our redacting Lambda Function. Then our analytics app gets the redacted object.
  • We can also have a customer loyalty database that goes through a Lambda function to enrich whatever data is there (add to it somehow). Then the Lambda function sends the info off to the S3 Access Point (AP1) that puts the info in an S3 bucket.
  • Then, if a marketing app wants the enriched customer loyalty data, it goes through a new S3 Object Lambda Access Point (not the one used a couple of bullets up), then goes back through that enriching Lambda function (not sure why), goes through AP1 and then gets the data from the S3 bucket
A
  • That is, with the analytics app, our S3 bucket gives info to our S3 Access Point, which gives the info to our Lambda Function, which redacts the PII and then gives the info (though it might be better to say that the requested info is then given back to, but you get the idea) our S3 Object Lambda Access Point (don’t know why this has to be a different kind of access point than the regular S3 Access Point. Perhaps because it’s only for one specific object (in this case, perhaps a redacted csv?))
  • With the marketing app, data goes from our S3 bucket to AP1 to our enriching Lambda function (though, again, don’t get why it needs enriching twice, but I suppose I could come up with something) (also, to be clear, this is not the same Lambda function as the one in the bullet above, though we are talking about the same S3 bucket and the same AP1), through our new S3 Object Lambda Access Point (that is, not the one described above), back to our marketing app.

S3 Object Lambda
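The redaction step in the flow above can be shown as a pure transform. In a real S3 Object Lambda function you would fetch the original object via the URL in the event and return the transformed body with the WriteGetObjectResponse API; here only the transform itself is sketched, and the regex/record are illustrative:

```python
import re

# Sketch of the PII-redaction transform an S3 Object Lambda function might
# run before the analytics app receives the object. Only the pure transform
# is shown; fetching/returning the object is omitted.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_pii(text):
    # replace anything that looks like an email address
    return EMAIL.sub("[REDACTED]", text)

record = "order=123, customer=jane.doe@example.com, total=49.99"
print(redact_pii(record))  # order=123, customer=[REDACTED], total=49.99
```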

31
Q
A

Correct answer and notes:
C) SSE-C. With SSE-C, the encryption happens in AWS and you have full control over the encryption keys.

Incorrect answer notes:
* A) is incorrect.
Notes: With SSE-S3, the encryption happens in AWS and the encryption keys are managed by AWS. Encryption keys stored in AWS.
* B) is incorrect. Notes: With SSE-KMS, the encryption happens in AWS, and the encryption keys are managed by AWS but you have full control over the rotation policy of the encryption key. Encryption keys stored in AWS.
* D) is incorrect. Notes: With Client-Side Encryption, you have to do the encryption yourself and you have full control over the encryption keys.

32
Q
A

correct: B) notes. With SSE-KMS, the encryption happens in AWS, and the encryption keys are managed by AWS but you have full control over the rotation policy of the encryption key. Encryption keys stored in AWS.

incorrect
* A) is incorrect. Notes: With SSE-S3, the encryption happens in AWS and you don’t manage the encryption keys. Encryption keys stored in AWS.
* C) is incorrect. Notes: With SSE-C, the encryption happens in AWS and you have full control over the encryption keys. You could define a rotation policy but this is against the question requirements (encryption keys must be stored and managed by AWS).
* D) is incorrect. Notes: With Client-Side Encryption, you have to do the encryption yourself and you have full control over the encryption keys. You could define a rotation policy but this is against the question requirements (encryption keys must be stored and managed by AWS).

33
Q
A

Correct answer and notes:
D) With Client-Side Encryption, you have to do the encryption yourself and you have full control over the encryption keys. You perform the encryption yourself and send the encrypted data to AWS. AWS does not know your encryption keys and cannot decrypt your data.

Incorrect notes:
* A) With SSE-S3, the encryption happens in AWS and the encryption keys are managed by AWS. Encryption keys stored in AWS.
* B) With SSE-KMS, the encryption happens in AWS, and the encryption keys are managed by AWS but you have full control over the rotation policy of the encryption key. Encryption keys stored in AWS.
* C) With SSE-C, the encryption happens in AWS and you have full control over the encryption keys.

34
Q
A

Correct answer and notes:
D) The cors one. notes: Cross-Origin Resource Sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. To learn more about CORS, go here: https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html

Incorrect answer notes:
* A) If the S3 bucket policy is wrong, the files won’t load in either case.
* B) If the IAM policy was wrong, the files won’t load in either case.
* C) If the encryption was wrong, the files won’t load in either case.

35
Q
A

A) SSE-C (no notes)

Incorrect:
No incorrect answers have notes.

36
Q
A

Correct answer note:
C) Enable MFA delete. Note: MFA Delete forces users to use MFA codes before deleting S3 objects. It’s an extra level of security to prevent accidental deletions.

Incorrect answer notes:
* A) bucket policy. notes: The bucket policy would prevent deletes altogether.
no other answers have incorrect notes.

37
Q
A

Correct answer notes

B) Do nothing and let SSE-S3 handle it. It'll do it by default. Notes: no notes.

Incorrect answer notes:

  • A) Use a bucket policy that enforces HTTPS connections. Notes: This will force in-flight encryption but not at-rest encryption.
    No other incorrect answers have notes.
38
Q
A

Correct answer:
C) Use S3 Access Logs and analyze them using Athena. Notes: S3 Access Logs log all the requests made to S3 buckets and Amazon Athena can then be used to run serverless analytics on top of the log files.

Incorrect answer:
* A) Restrict their IAM policies and look at CloudTrail logs. Notes: This will be noticed by employees as they won't be able to access the files.

  • B) Use a bucket policy. Notes: A bucket policy would prevent access to the user but they would notice it.
39
Q
A

Incorrect answers:
* A) has no notes that weren’t already made note of
* B) S3 presigned URL. Note: S3 Pre-Signed URLs are temporary URLs that you generate to grant time-limited access to some actions in your S3 bucket.
* C) S3 bucket policies. Notes: S3 Bucket Policies cannot help in giving temporary access to a dynamic list of federated users.
* D) IAM Users. Notes: Federated users should not have IAM Users.