AWS S3 Flashcards
You have a 25 GB file that you’re trying to upload to S3 but you’re getting errors. What is a possible solution for this?
A) The file size limit on S3 is 5 GB
B) Update the bucket policy to allow the larger file
C) Use Multi-Part upload when uploading files larger than 5 GB
D) Encrypt the file
C
Multi-Part Upload is required for objects larger than 5 GB (the single PUT upload limit) and recommended as soon as a file exceeds 100 MB.
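A minimal boto3 sketch of this, assuming placeholder file and bucket names: upload_file switches to multipart automatically once the file crosses the configured threshold.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Switch to multipart above ~100 MB and upload in 64 MB parts.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
)

# "backup-25gb.bin" and "my-example-bucket" are placeholders.
s3.upload_file("backup-25gb.bin", "my-example-bucket", "backups/backup-25gb.bin", Config=config)
```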
You’re getting errors while trying to create a new S3 bucket named “dev”. You’re using a new AWS Account with no S3 buckets created before. What is a possible cause for this?
A) You’re missing IAM permissions to create an S3 bucket
B) S3 bucket names must be globally unique and “dev” is already taken
B
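For reference, a hedged boto3 sketch of handling the name-collision error (the bucket name and the suggested fallback are illustrative):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    # In Regions other than us-east-1, also pass CreateBucketConfiguration.
    s3.create_bucket(Bucket="dev")  # almost certainly taken: names are global across all AWS accounts
except ClientError as e:
    if e.response["Error"]["Code"] in ("BucketAlreadyExists", "BucketAlreadyOwnedByYou"):
        print("Pick a globally unique name, e.g. dev-<account-id>-<region>")
    else:
        raise
```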
You have updated an S3 bucket policy to allow IAM users to read/write files in the S3 bucket, but one of the users complains that they can’t perform a PutObject API call. What is a possible cause for this?
A) The S3 bucket policy must be wrong
B) The user is lacking permissions
C) The IAM user must have an explicit DENY in the attached IAM policy
D) You need to contact AWS support to lift this limit
C
Explicit DENY in an IAM Policy will take precedence over an S3 bucket policy.
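A sketch of what such an explicit deny looks like, attached as an inline IAM user policy via boto3 (user name, policy name, and bucket ARN are placeholders):

```python
import json
import boto3

# An explicit Deny on s3:PutObject in the user's IAM policy overrides the
# Allow granted by the bucket policy.
deny_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-example-bucket/*",
    }],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="example-user",
    PolicyName="ExplicitDenyPutObject",
    PolicyDocument=json.dumps(deny_policy),
)
```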
You have enabled versioning in your S3 bucket which already contains a lot of files. Which version will the existing files have?
A) 1
B) 0
C) -1
D) null
D
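A quick boto3 check, with placeholder bucket and key names: listing object versions shows the literal version ID "null" for objects that predate versioning.

```python
import boto3

s3 = boto3.client("s3")

# Objects that existed before versioning was enabled keep the literal
# version ID "null"; new uploads get generated version IDs.
resp = s3.list_object_versions(Bucket="my-example-bucket", Prefix="report.csv")
for version in resp.get("Versions", []):
    print(version["Key"], version["VersionId"])  # pre-existing objects print "null"
```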
You want the content of an S3 bucket to be fully available in different AWS Regions. That will help your team perform data analysis at the lowest latency and cost possible. What S3 feature should you use?
A) Amazon CloudFront distributions
B) S3 Versioning
C) S3 Static Website Hosting
D) S3 Replication
D
S3 Replication lets you replicate objects from one S3 bucket to another, in the same AWS Region (Same-Region Replication) or in a different Region (Cross-Region Replication).
You have 3 S3 buckets. One source bucket A, and two destination buckets B and C in different AWS Regions. You want to replicate objects from bucket A to both bucket B and C. How would you achieve this?
A) Configure replication from bucket A to B, then from A to C
B) Configure replication from bucket A to B, then from B to C
C) Configure replication from bucket A to C, then from C to B
A
S3 Replication does not chain: objects replicated from bucket A to bucket B are not replicated again from B to C, so you configure two replication rules on the source bucket (A → B and A → C).
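A sketch of the two-destination setup on bucket A via boto3 (the role ARN and bucket names/ARNs are placeholders; versioning must already be enabled on all three buckets):

```python
import boto3

s3 = boto3.client("s3")

# One replication rule per destination, both defined on the source bucket.
s3.put_bucket_replication(
    Bucket="bucket-a",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "a-to-b",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::bucket-b"},
            },
            {
                "ID": "a-to-c",
                "Status": "Enabled",
                "Priority": 2,
                "Filter": {"Prefix": ""},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::bucket-c"},
            },
        ],
    },
)
```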
Which of the following is NOT a Glacier Deep Archive retrieval mode?
A) Expedited (1 - 5 minutes)
B) Standard (12 hours)
C) Bulk (48 hours)
A
Which of the following is NOT a retrieval mode for Glacier Flexible Retrieval?
A) Instant (10 seconds)
B) Expedited (1 - 5 minutes)
C) Standard (3 - 5 hours)
D) Bulk (5 - 12 hours)
A
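A boto3 sketch of starting a retrieval job (bucket, key, and tier choice are placeholders): Deep Archive accepts only the Standard and Bulk tiers, Flexible Retrieval also accepts Expedited, and neither offers an "Instant" job tier.

```python
import boto3

s3 = boto3.client("s3")

# Restore an archived object and keep the restored copy available for 7 days.
s3.restore_object(
    Bucket="my-example-bucket",
    Key="archive/2020-logs.tar.gz",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Bulk"},  # or "Standard"; "Expedited" only for Flexible Retrieval
    },
)
```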
How can you be notified when there’s an object uploaded to your S3 bucket?
A) S3 Select
B) S3 Access Logs
C) S3 Event Notifications
D) S3 Analytics
C
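A boto3 sketch wiring object-created events to an SQS queue (bucket name and queue ARN are placeholders; the queue policy must allow S3 to send messages):

```python
import boto3

s3 = boto3.client("s3")

# Send an event notification to SQS whenever any object is created in the bucket.
s3.put_bucket_notification_configuration(
    Bucket="my-example-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:uploads-queue",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)
```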
You have an S3 bucket that has S3 Versioning enabled. This S3 bucket has a lot of objects, and you would like to remove old object versions to reduce costs. What’s the best approach to automate the deletion of these old object versions?
A) S3 Lifecycle Rules - Transition Actions
B) S3 Lifecycle Rules - Expiration Actions
C) S3 Access Logs
B
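A boto3 sketch of such an expiration action (bucket name and day count are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Permanently delete noncurrent (old) object versions 30 days after they are superseded.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-old-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        }]
    },
)
```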
How can you automate the transition of S3 objects between their different tiers?
A) AWS Lambda
B) CloudWatch Events
C) S3 Lifecycle Rules
C
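A boto3 sketch of transition actions (bucket name, day counts, and storage classes are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Move objects to Standard-IA after 30 days and to Glacier Flexible Retrieval after 90 days.
# Note: this call replaces the bucket's entire lifecycle configuration.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-down",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }]
    },
)
```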
While you’re uploading large files to an S3 bucket using Multi-part Upload, network issues have left a lot of unfinished parts stored in the S3 bucket. These unfinished parts are unusable and cost you money. What is the best approach to remove these unfinished parts?
A) Use AWS Lambda to loop on each old/unfinished part and delete them
B) Request AWS Support to help you delete old/unfinished parts
C) Use an S3 Lifecycle Policy to automate old/unfinished part deletion
C
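A boto3 sketch of the corresponding lifecycle action (bucket name and day count are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Clean up parts from multipart uploads that were never completed within 7 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "abort-incomplete-uploads",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }]
    },
)
```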
You are looking to get recommendations for S3 Lifecycle Rules. How can you analyze the optimal number of days to move objects between different storage tiers?
A) S3 Inventory
B) S3 Analytics
C) S3 Lifecycle Rules Advisor
B
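For reference, a hedged boto3 sketch of enabling Storage Class Analysis on a bucket and exporting its daily report to another bucket (all names and the prefix are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# S3 Analytics (Storage Class Analysis) exports a daily CSV report that helps
# decide after how many days objects should transition between storage tiers.
s3.put_bucket_analytics_configuration(
    Bucket="my-example-bucket",
    Id="whole-bucket-analysis",
    AnalyticsConfiguration={
        "Id": "whole-bucket-analysis",
        "StorageClassAnalysis": {
            "DataExport": {
                "OutputSchemaVersion": "V_1",
                "Destination": {
                    "S3BucketDestination": {
                        "Format": "CSV",
                        "Bucket": "arn:aws:s3:::my-analytics-results",
                        "Prefix": "storage-class-analysis/",
                    }
                },
            }
        },
    },
)
```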
You are looking to build an index of your files in S3, using Amazon RDS PostgreSQL. To build this index, it is necessary to read the first 250 bytes of each object in S3, which contain some metadata about the content of the file itself. There are over 100,000 files in your S3 bucket, amounting to 50 TB of data. How can you build this index efficiently?
A) Use the RDS Import feature to load the data from S3 to PostgreSQL and run a SQL query to build the index
B) Create an application that will traverse the S3 bucket, read all the files one by one, extract the first 250 bytes and store that information in RDS
C) Create an application that will traverse the S3 bucket, issue a Byte Range Fetch for the first 250 bytes, and store that information in RDS
D) Create an application that will traverse the S3 bucket, issue a S3 Select for the first 250 bytes, and store that information in RDS
C
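A boto3 sketch of the byte-range approach (bucket name is a placeholder): the Range header on GetObject returns only the requested bytes, so roughly 250 bytes per object are transferred instead of the full 50 TB.

```python
import boto3

s3 = boto3.client("s3")

# Walk the bucket and fetch only the first 250 bytes of each object.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-example-bucket"):
    for obj in page.get("Contents", []):
        resp = s3.get_object(
            Bucket="my-example-bucket",
            Key=obj["Key"],
            Range="bytes=0-249",
        )
        header = resp["Body"].read()
        # ... parse `header` and store the extracted metadata in RDS PostgreSQL ...
```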
You have a large dataset stored on-premises that you want to upload to an S3 bucket. The dataset is divided into 10 GB files. You have good bandwidth, but your internet connection isn’t stable. What is the best way to upload this dataset to S3 quickly while minimizing the impact of the unstable connection?
A) Use Multi-part Upload only
B) Use S3 Select & Use S3 Transfer Acceleration
C) Use S3 Multi-part Upload & S3 Transfer Acceleration
C
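A boto3 sketch combining both (bucket, file, and key names are placeholders): enable Transfer Acceleration once, then upload through the accelerate endpoint with multipart settings so a failed part can be retried without restarting the whole upload.

```python
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

# One-time setup: enable Transfer Acceleration on the bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="my-example-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload through the accelerate endpoint; multipart kicks in above the threshold.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3.upload_file(
    "dataset-part-01.bin",
    "my-example-bucket",
    "dataset/part-01.bin",
    Config=TransferConfig(
        multipart_threshold=100 * 1024 * 1024,
        multipart_chunksize=64 * 1024 * 1024,
    ),
)
```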