Section 11: Amazon S3 Introduction Flashcards

1
Q

True/False
Amazon S3 is infinitely scaling storage. Since so much of the internet uses it, it's also a good choice if you want an AWS resource with a ton of online tutorials/instructions.

A

True

Subsection: S3 Overview

2
Q
What are common use cases for Amazon S3?
  • Backup and storage
  • Disaster recovery
  • Archiving
  • Hybrid cloud storage
  • Application hosting
  • Media hosting (video/images/etc.)
  • Data lakes and big data analytics
  • Software delivery & static websites
A
3
Q

Amazon S3 Buckets
* Amazon S3 allows people to store objects (like files) in "buckets" (directories)
* buckets must have a globally unique name (across the whole internet, everyone using S3)
* buckets are defined at the region level
* S3 looks like a global service, but buckets are created in a region
* naming convention: no uppercase, no underscore, 3-63 characters long, not an IP, must start with a lowercase letter or number, must not start with "xn--", must not end with "-s3alias"
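The naming convention above can be sketched as a small validator. This is a hypothetical helper for illustration only; AWS enforces a few more rules than the ones listed on this card:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check a candidate S3 bucket name against the rules on this card only."""
    if not (3 <= len(name) <= 63):
        return False
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", name):
        return False  # must not be formatted like an IP address
    if name.startswith("xn--") or name.endswith("-s3alias"):
        return False
    # lowercase letters, digits, dots, hyphens; no uppercase, no underscores;
    # must start (and here, also end) with a lowercase letter or number
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name) is not None
```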

A

S3 Overview

4
Q

Amazon S3 Objects
* Objects (files) have a key, and the key is the full path within the bucket (for s3://my-bucket/my-folder/my_file.txt, the key is my-folder/my_file.txt)
* key = prefix + object name. In the example above, the prefix is my-folder/ and the object name is my_file.txt
* there aren't really directories, though the UI might make it look like it; they're just keys that contain slashes
* max object size is 5 TB (5,000 GB)
* if uploading a file larger than 5 GB, you must use multi-part upload
* metadata (key/value pairs)
* tags (Unicode key/value pairs, up to 10) - useful for security/lifecycle
* version ID (if versioning is enabled)
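The key = prefix + object name split can be illustrated with a tiny helper (hypothetical, not part of any AWS SDK):

```python
def split_key(key: str) -> tuple[str, str]:
    """Split an S3 object key into (prefix, object name).
    The prefix is everything up to and including the last '/'."""
    idx = key.rfind("/")
    if idx == -1:
        return "", key  # a key at the bucket root has an empty prefix
    return key[: idx + 1], key[idx + 1:]
```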

A

S3 Overview

5
Q

True/False: S3 objects are not public unless you explicitly make them public. (Related: presigned URLs)

A

True. You can view your own objects using a presigned URL (which you can get by clicking the "Open" button on your S3 object in the console).

S3 Overview / S3 Hands On

6
Q
  • User-Based: IAM Policies - which API calls should be allowed for a specific user from IAM
  • Resource-Based: Bucket Policies - bucket-wide rules from the S3 console - allows cross-account access
  • Object Access Control Lists (ACLs) - finer grain (can be disabled)
  • Bucket Access Control Lists (ACLs) - less common (can be disabled)
  • Note: an IAM principal can access an S3 object if ((the user's IAM permissions ALLOW it OR the resource policy ALLOWs it) AND (there's no explicit DENY))
  • Encrypt objects using encryption keys
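The access rule in the note above can be sketched as a boolean expression (a simplification of the full IAM policy-evaluation logic):

```python
def can_access(iam_allows: bool, resource_policy_allows: bool, explicit_deny: bool) -> bool:
    """Simplified S3 authorization decision for an IAM principal:
    allowed if the IAM permissions OR the resource policy allow it,
    AND there is no explicit deny anywhere."""
    return (iam_allows or resource_policy_allows) and not explicit_deny
```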
A

True

S3 Security: Bucket Policy

7
Q

S3 Bucket Policies - JSON-based policies
* Resources: buckets and objects
* Effect: Allow/Deny
* Actions: set of APIs to allow or deny
* Principal: the account or user to apply the policy to
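Those four fields fit together like this; a sketch that assembles the classic public-read policy as a Python dict (the bucket name examplebucket is a placeholder):

```python
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",                              # Allow/Deny
            "Principal": "*",                               # who it applies to (everyone)
            "Action": ["s3:GetObject"],                     # API calls to allow
            "Resource": ["arn:aws:s3:::examplebucket/*"],   # objects in the bucket
        }
    ],
}

print(json.dumps(policy, indent=2))
```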

A

S3 Security: Bucket Policy

8
Q

Use an S3 Bucket Policy to:
* grant public access to bucket
* force objects to be encrypted at upload
* grant access to another account (cross account)

A

S3 Security: Bucket Policy

9
Q

(Note to self: might rework this card to make it less memory-dependent.)

S3 Bucket Policy example. Describe the resources this applies to (2), the "Effect" options, a definition of the "Action" set, and a definition of "Principal":
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket/*"
      ]
    }
  ]
}

A

S3 Bucket Policies - JSON-based policies
* Resources: buckets and objects
* Effect: Allow/Deny
* Actions: set of APIs to allow or deny
* Principal: the account or user to apply the policy to

S3 Security: Bucket Policy

10
Q

True/False. If false, what’s the correct version?

You use a Bucket Policy for public access. To do this, you give your S3 Bucket a Bucket Policy that allows public access, and then anyone on the internet can access your S3 bucket's objects.

A

True

S3 Security: Bucket Policy

11
Q

True/False. If false, what’s the correct version?

You can give a user access to an S3 bucket using IAM permissions. To do this, you give an IAM User an IAM Policy that allows access to your S3 bucket.

A

True

S3 Security: Bucket Policy

12
Q

True/False. If false, what’s the correct version?

You can give an EC2 instance access to your S3 bucket by using IAM Roles. To do this, you create an EC2 Instance Role for your EC2 instance and attach IAM permissions to that role that allow your EC2 instance to access your S3 bucket.

A

True

S3 Security: Bucket Policy

13
Q

True/False. If false, what’s the correct version?

You can give another IAM User on a different AWS account access to your S3 bucket. This is called Cross-Account Access (not sure that’s an official AWS term) and you provide it using an S3 Bucket Policy. To do this, you create/update your S3 Bucket’s S3 Bucket Policy to allow Cross Account.

A

True

S3 Security: Bucket Policy

14
Q

True/False. If false, what’s the correct version?

You can create Bucket settings to block all public access by default. This “Block all public access” feature blocks public access to buckets and objects granted through new and/or any access control lists (ACLs), or granted through new and/or any public bucket or access point policies.

This was created to prevent company data leaks. Can be set at the account level. Leave these on if your bucket should never be public

A

True

S3 Security: Bucket Policy

15
Q

You created a bucket policy allowing everyone to see your coffee.jpg, but it doesn't seem to be working. What's wrong?

{
  "Version": "2012-10-17",
  "Id": "Policy1696976642467",
  "Statement": [
    {
      "Sid": "Stmt1696976641525",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::2023-283-tuesday-s3"
    }
  ]
}

A

Your "Resource" needs to end in "/*". The "/*" makes the policy apply to every object inside the bucket arn:aws:s3:::2023-283-tuesday-s3; as written, it applies only to the bucket itself, not to the objects in it.

16
Q

If you get a 403 Forbidden error on your static website hosted through S3, what might be the issue with what your bucket policy is allowing?

A

Your bucket policy might not be allowing public reads!

S3 website hosting

17
Q

S3 can host static websites and have them accessible on the Internet.
Not that important, but depending on the region your URL might be

http://bucket-name.s3-website-aws-region.amazonaws.com or http://bucket-name.s3-website.aws-region.amazonaws.com

A
18
Q

Versioning in S3
* you can version files in S3
* enabled at the bucket level
* uploading a file with the same key will overwrite it and create a new version
* best practice to version your buckets
* any file not versioned prior to enabling versioning will have version "null"
* suspending versioning does not delete previous versions (you can delete versions by hand, though)

A

True

19
Q

If you toggle Show Versions off and delete an object in a versioned bucket, it might look like you deleted the object, but if you turn Show Versions back on you'll see that your object just has a delete marker, and you can get it back.

A

True

20
Q

Amazon S3 Replication (CRR & SRR)
* must enable versioning in both source and destination buckets
* CRR: Cross-Region Replication
* SRR: Same-Region Replication
* buckets can be in different AWS accounts
* copying is asynchronous
* must give proper IAM permissions to S3
* use cases for CRR: compliance, lower-latency access, replication across accounts
* use cases for SRR: log aggregation, live replication between production and test accounts

A

True

21
Q

Replication (cont'd)
* after you enable replication, only new objects are replicated
* if you want to replicate existing objects, you have to use S3 Batch Replication (which replicates existing objects and objects that failed replication)
* for delete operations, you can replicate delete markers from source to target (optional); deletions with a version ID are not replicated (to avoid malicious deletes)
* there is no "chaining" of replication: if bucket 1 has replication into bucket 2, which has replication into bucket 3, then objects created in bucket 1 are not replicated into bucket 3
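The no-chaining rule can be modeled in a few lines: an object lands only in the direct replication targets of the bucket it was written to (a toy model, not an AWS API):

```python
def replication_targets(rules: dict[str, list[str]], source: str) -> set[str]:
    """Given replication rules {source_bucket: [destination_buckets]},
    return where an object uploaded to `source` ends up.
    No chaining: only direct destinations receive the object."""
    return set(rules.get(source, []))

# bucket1 replicates into bucket2, and bucket2 replicates into bucket3
rules = {"bucket1": ["bucket2"], "bucket2": ["bucket3"]}
# An object created in bucket1 reaches bucket2 but NOT bucket3
```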

A

True

S3 Replication Notes

22
Q

S3 Replication Management
* Let’s say you have two S3 buckets: mybucket_1 and mybucket_2. You want everything you do to mybucket_1 (with some caveats that we’ll cover later) to happen to mybucket_2.
* 1) In which bucket do you create the replication rule?
* 2) Does creating a replication rule in mybucket_1 so that changes get replicated in mybucket_2 mean that if you upload a file to mybucket_2 it’ll get copied into mybucket_1? As in, if I permanently delete something from mybucket_2, will it get permanently deleted from mybucket_1?
* 3) Say you toggled Show Versions on in mybucket_1 and permanently deleted a specific version of beach.jpg. Does beach.jpg get deleted in mybucket_2?
* 4) Say you did not toggle Show Versions on (the default is off, so we're saying it's still off) in mybucket_1 and deleted beach.jpg. Does beach.jpg get deleted in mybucket_2?

A

1) In mybucket_1.
2) No. Replication is one-directional: a rule in mybucket_1 replicates changes into mybucket_2, not the other way around.
3) It does NOT. With Show Versions on, deleting a specific version is a permanent delete with a version ID, and deletions with a version ID are not replicated. It would not matter if you'd enabled delete marker replication.
4) If you enabled delete marker replication in your mybucket_1 replication rule, then yes, the "delete" (which is not the same as a permanent delete, because you deleted without specifying a version ID) will also happen to beach.jpg in mybucket_2. Both objects end up with a delete marker, which you'd only see with Show Versions on.

S3 Replication Hands On

23
Q

S3 Storage Classes:
* S3 Standard - General Purpose
* S3 Standard - Infrequent Access
* S3 One Zone-Infrequent Access
* S3 Glacier Instant Retrieval
* S3 Glacier Flexible Retrieval
* S3 Glacier Deep Archive
* S3 Intelligent Tiering
* Can move between classes manually or using S3 Lifecycle configurations

A

True

S3 Storage Classes Overview

24
Q

S3 Durability and Availability
* Durability: how unlikely it is that S3 loses an object. S3 is highly durable (11 nines: 99.999999999%). If you store 10 million objects on S3, you can expect to lose a single object once every 10,000 years (same for all storage classes)
* Availability: how readily available the service is; varies by storage class. S3 Standard has 99.99% availability, which means it's unavailable about 53 minutes a year
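The "53 minutes a year" figure follows directly from the 99.99% availability number; a quick arithmetic check:

```python
# 99.99% availability => 0.01% expected downtime per year
minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a (non-leap) year
downtime = (1 - 0.9999) * minutes_per_year
print(round(downtime, 1))  # → 52.6, commonly rounded up to "about 53 minutes a year"
```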

A

S3 Storage Classes Overview

25
Q

S3 Standard - General Purpose

  • 99.99% availability
  • used for frequently accessed data
  • low latency and high throughput
  • can sustain 2 concurrent facility failures
  • use cases: big data analytics, mobile & gaming applications, content distribution
A

S3 Storage Classes Overview

25
Q

S3 - Infrequent Access
* for data less frequently accessed, but that requires rapid access when needed
* lower cost than S3 Standard
* S3 Standard-Infrequent Access (S3 Standard-IA): 99.9% availability. Use cases: disaster recovery, backups
* S3 One Zone-Infrequent Access (S3 One Zone-IA): high durability (99.999999999%) in a single AZ, data lost when the AZ is destroyed; 99.5% availability; use cases: storing secondary backup copies of on-premises data, or data you can recreate

A

S3 Storage Classes Overview

26
Q

S3 Glacier Storage Classes
* low-cost object storage meant for archiving/backup
* pricing: price for storage + object retrieval cost
* Amazon S3 Glacier Instant Retrieval: millisecond retrieval, great for data accessed once a quarter; minimum storage duration of 90 days
* Amazon S3 Glacier Flexible Retrieval (formerly S3 Glacier): Expedited (1-5 minutes), Standard (3-5 hours), Bulk (5-12 hours, free); minimum storage duration of 90 days
* Amazon S3 Glacier Deep Archive - for long-term storage: Standard (12 hours), Bulk (48 hours); lowest cost of the Glacier classes; minimum storage duration of 180 days

A

S3 Storage Classes Overview

27
Q

S3 Intelligent-Tiering
* small monthly monitoring and auto-tiering fee
* moves objects automatically between access tiers based on usage
* there are no retrieval charges in S3 Intelligent-Tiering
The tiers:
* Frequent Access tier (automatic): default tier
* Infrequent Access tier (automatic): objects not accessed for 30 days
* Archive Instant Access tier (automatic): objects not accessed for 90 days
* Archive Access tier (optional): configurable from 90 days to 700+ days
* Deep Archive Access tier (optional): configurable from 180 days to 700+ days

A

S3 Storage Classes Overview

28
Q
A

Correct answer:

C) Use multi-part upload when uploading files larger than 5 GB.
Note: multi-part upload is recommended as soon as the file is over 100 MB.

Incorrect answer notes:
A) The file size limit on S3 is 5 TB.
B & D have no other notes.

S3 Storage Classes Overview

29
Q
A
30
Q
A

B) S3 Bucket names must be globally unique and dev is already taken.

no answers have notes.

31
Q
A

D) null

no answers have notes.

32
Q
A

Correct answer notes:
C) The IAM user must have an explicit DENY in the attached IAM policy. Notes: an explicit DENY in an IAM Policy takes precedence over an S3 bucket policy.

Incorrect answer notes:
B) The user is lacking permissions: even if the user doesn't have permissions, in the question the S3 bucket policy explicitly allows the user, and that's enough from a permissions perspective.

NO OTHER ANSWERS HAVE NOTES.

33
Q
A

Correct answer notes:
B) S3 Replication. Notes: S3 Replication allows you to replicate data from an S3 bucket to another in the same/different AWS Region.

Incorrect answer notes:
A) Amazon CloudFront Distributions - Amazon CloudFront caches your data in different Edge Locations across the world, but it won’t copy the content of your S3 bucket to a different AWS Region.
C) S3 Versioning: S3 Versioning allows you to keep different versions of a file in case of overwrites.
D) no notes

34
Q
A

Correct: A) Configure replication from A to B and then configure replication from A to C

Incorrect: B & C are wrong because S3 Replication doesn't support "chaining" of replication.

35
Q
A

Correct: B) Expedited.

no answers have notes.

36
Q
A

Correct: D) Instant (10 seconds)

no answers have notes.