AWS S3 Flashcards
Prepare for S3-related questions on the AWS Certified Solutions Architect – Associate exam
In what ways does Amazon Simple Storage Service (Amazon S3) object storage differ from block and file storage? (Choose 2 answers)
A. Amazon S3 stores data in fixed size blocks.
B. Objects are identified by a numbered address.
C. Objects can be any size.
D. Objects contain both data and metadata.
E. Objects are stored in buckets.
D, E. Objects are stored in buckets, and objects contain both data and metadata.
Which of the following are not appropriate use cases for Amazon Simple Storage Service (Amazon S3)? (Choose 2 answers)
A. Storing web content
B. Storing a file system mounted to an Amazon Elastic Compute Cloud (Amazon EC2) instance
C. Storing backups for a relational database
D. Primary storage for a database
E. Storing logs for analytics
B, D. Amazon S3 cannot be mounted to an Amazon EC2 instance like a file system and
should not serve as primary database storage.
What are some of the key characteristics of Amazon Simple Storage Service (Amazon S3)? (Choose 3 answers)
A. All objects have a URL.
B. Amazon S3 can store unlimited amounts of data.
C. Objects are world-readable by default.
D. Amazon S3 uses a REST (Representational State Transfer) Application Program Interface (API).
E. You must pre-allocate the storage in a bucket.
A, B, D. C and E are incorrect—objects are private by default, and storage in a bucket does
not need to be pre-allocated.
Which features can be used to restrict access to Amazon Simple Storage Service (Amazon S3) data? (Choose 3 answers)
A. Enable static website hosting on the bucket.
B. Create a pre-signed URL for an object.
C. Use an Amazon S3 Access Control List (ACL) on a bucket or object.
D. Use a lifecycle policy.
E. Use an Amazon S3 bucket policy.
B, C, E. Static website hosting does not restrict data access, and neither does an Amazon
S3 lifecycle policy.
Your application stores critical data in Amazon Simple Storage Service (Amazon S3), which must be protected against inadvertent or intentional deletion. How can this data be protected? (Choose 2 answers)
A. Use cross-region replication to copy data to another bucket automatically.
B. Set a vault lock.
C. Enable versioning on the bucket.
D. Use a lifecycle policy to migrate data to Amazon Glacier.
E. Enable MFA Delete on the bucket.
C, E. Versioning protects data against inadvertent or intentional deletion by storing all
versions of the object, and MFA Delete requires a one-time code from a Multi-Factor
Authentication (MFA) device to delete objects. Cross-region replication and migration to
the Amazon Glacier storage class do not protect against deletion. Vault locks are a
feature of Amazon Glacier, not a feature of Amazon S3.
Your company stores documents in Amazon Simple Storage Service (Amazon S3), but it wants to minimize cost. Most documents are used actively for only about a month, then much less frequently. However, all data needs to be available within minutes when requested. How can you meet these requirements?
A. Migrate the data to Amazon S3 Reduced Redundancy Storage (RRS) after 30 days.
B. Migrate the data to Amazon Glacier after 30 days.
C. Migrate the data to Amazon S3 Standard – Infrequent Access (IA) after 30 days.
D. Turn on versioning, then migrate the older version to Amazon Glacier.
C. Migrating the data to Amazon S3 Standard-IA after 30 days using a lifecycle policy is
correct. Amazon S3 RRS should only be used for easily replicated data, not critical data.
Migration to Amazon Glacier might minimize storage costs if retrievals are infrequent,
but documents would not be available in minutes when needed.
How is data stored in Amazon Simple Storage Service (Amazon S3) for high durability?
A. Data is automatically replicated to other regions.
B. Data is automatically replicated within a region.
C. Data is replicated only if versioning is enabled on the bucket.
D. Data is automatically backed up on tape and restored if needed.
B. Data is automatically replicated within a region. Replication to other regions and
versioning are optional. Amazon S3 data is not backed up to tape.
Based on the following Amazon Simple Storage Service (Amazon S3) URL, which one of the following statements is correct? https://bucket1.abc.com.s3.amazonaws.com/folderx/myfile.doc
A. The object “myfile.doc” is stored in the folder “folderx” in the bucket “bucket1.abc.com.”
B. The object “myfile.doc” is stored in the bucket “bucket1.abc.com.”
C. The object “folderx/myfile.doc” is stored in the bucket “bucket1.abc.com.”
D. The object “myfile.doc” is stored in the bucket “bucket1.”
C. In a URL, the bucket name precedes the string "s3.amazonaws.com/," and the object
key is everything after that. Amazon S3 has a flat namespace; "folders" are simply key-name prefixes.
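The rule in the answer can be sketched as a small parser. The helper name and the path-style branch are illustrative assumptions; only the bucket-before-"s3.amazonaws.com" rule comes from the flashcard.

```python
from urllib.parse import urlparse

def parse_s3_url(url):
    """Split an S3 URL into (bucket, key) per the rule in the answer above."""
    parsed = urlparse(url)
    host = parsed.netloc
    key = parsed.path.lstrip("/")
    suffix = ".s3.amazonaws.com"
    if host.endswith(suffix):
        # Virtual-hosted-style: bucket name is everything before the suffix.
        return host[: -len(suffix)], key
    if host == "s3.amazonaws.com":
        # Path-style (assumed variant): bucket is the first path segment.
        bucket, _, rest = key.partition("/")
        return bucket, rest
    raise ValueError("not a recognized S3 URL")

print(parse_s3_url("https://bucket1.abc.com.s3.amazonaws.com/folderx/myfile.doc"))
# → ('bucket1.abc.com', 'folderx/myfile.doc')
```

Note that the entire string "folderx/myfile.doc" is the object key; "folderx/" is a prefix, not a folder.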
To have a record of who accessed your Amazon Simple Storage Service (Amazon S3) data and from where, what should you do?
A. Enable versioning on the bucket.
B. Enable website hosting on the bucket.
C. Enable server access logs on the bucket.
D. Create an AWS Identity and Access Management (IAM) bucket policy.
E. Enable Amazon CloudWatch logs.
C. Amazon S3 server access logs record which requesters accessed the objects in
your bucket, including the requesting IP address.
What are some reasons to enable cross-region replication on an Amazon Simple Storage Service (Amazon S3) bucket? (Choose 2 answers)
A. You want a backup of your data in case of accidental deletion.
B. You have a set of users or customers who can access the second bucket with lower latency.
C. For compliance reasons, you need to store data in a location at least 300 miles away from the first region.
D. Your data needs at least five nines of durability.
B, C. Cross-region replication can help lower latency and satisfy compliance
requirements on distance. Amazon S3 is designed for eleven nines of durability for
objects in a single region, so a second region does not significantly increase
durability. Cross-region replication does not protect against accidental deletion.
Your company requires that all data sent to external storage be encrypted before being sent. Which Amazon Simple Storage Service (Amazon S3) encryption solution will meet this requirement?
A. Server-Side Encryption (SSE) with AWS-managed keys (SSE-S3)
B. SSE with customer-provided keys (SSE-C)
C. Client-side encryption with customer-managed keys
D. Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)
C. If data must be encrypted before being sent to Amazon S3, client-side encryption must
be used.
You have a popular web application that accesses data stored in an Amazon Simple Storage Service (Amazon S3) bucket. You expect the access to be very read-intensive, with expected request rates of up to 500 GETs per second from many clients. How can you increase the performance and scalability of Amazon S3 in this case?
A. Turn on cross-region replication to ensure that data is served from multiple locations.
B. Ensure randomness in the namespace by including a hash prefix to key names.
C. Turn on server access logging.
D. Ensure that key names are sequential to enable pre-fetch.
B. Amazon S3 scales automatically, but for request rates over 100 GETs per second, it
helps to make sure there is some randomness in the key space. Replication and logging
will not affect performance or scalability. Using sequential key names could have a
negative effect on performance or scalability.
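The hash-prefix idea from this answer can be sketched as follows. The helper name and the four-character prefix length are illustrative choices, not part of the exam material.

```python
import hashlib

def hashed_key(original_key, prefix_len=4):
    """Prepend a short hex hash of the key so sequential key names
    are spread evenly across the key space."""
    digest = hashlib.md5(original_key.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}-{original_key}"

# Sequential names like these would otherwise cluster in one partition.
for key in ("logs/2016-01-01/0001.log", "logs/2016-01-01/0002.log"):
    print(hashed_key(key))
```

Because the hash is derived from the key itself, the randomized name is reproducible: a reader can recompute the prefix to locate the object.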
What is needed before you can enable cross-region replication on an Amazon Simple Storage Service (Amazon S3) bucket? (Choose 2 answers)
A. Enable versioning on the bucket.
B. Enable a lifecycle rule to migrate data to the second region.
C. Enable static website hosting.
D. Create an AWS Identity and Access Management (IAM) policy to allow Amazon S3 to replicate objects on your behalf.
A, D. You must enable versioning before you can enable cross-region replication, and
Amazon S3 must have IAM permissions to perform the replication. Lifecycle rules
migrate data from one storage class to another, not from one bucket to another. Static
website hosting is not a prerequisite for replication.
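For option D, the replication role's permissions generally take the shape below. The bucket names are placeholders, and this is a sketch of the commonly documented permission set, not an exhaustive policy.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetReplicationConfiguration", "s3:ListBucket"],
      "Resource": "arn:aws:s3:::source-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObjectVersionForReplication", "s3:GetObjectVersionAcl"],
      "Resource": "arn:aws:s3:::source-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ReplicateObject", "s3:ReplicateDelete"],
      "Resource": "arn:aws:s3:::destination-bucket/*"
    }
  ]
}
```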
Your company has 100 TB of financial records that need to be stored for seven years by law. Experience has shown that any record more than one year old is unlikely to be accessed. Which of the following storage plans meets these needs in the most cost-efficient manner?
A. Store the data on Amazon Elastic Block Store (Amazon EBS) volumes attached to t2.micro instances.
B. Store the data on Amazon Simple Storage Service (Amazon S3) with lifecycle policies that change the storage class to Amazon Glacier after one year and delete the object after seven years.
C. Store the data in Amazon DynamoDB and run daily script to delete data older than seven years.
D. Store the data in Amazon Elastic MapReduce (Amazon EMR).
B. Amazon S3 is the most cost-effective storage on AWS, and lifecycle policies are a
simple and effective feature for addressing the business requirements.
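A lifecycle rule for this scenario could look like the fragment below. The rule ID, prefix, and day counts (365 days ≈ one year, 2555 days ≈ seven years) are illustrative values chosen to match the scenario.

```json
{
  "Rules": [
    {
      "ID": "ArchiveThenExpire",
      "Status": "Enabled",
      "Filter": { "Prefix": "records/" },
      "Transitions": [
        { "Days": 365, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 2555 }
    }
  ]
}
```

The transition moves rarely accessed records to cheaper archival storage after a year, and the expiration deletes them once the seven-year retention obligation ends.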