Chapter 3 - Storage: Amazon Simple Storage Service (S3), EBS, EFS, Storage Gateway, Snowball, FSx, DataSync Flashcards
What are some of the key characteristics of Amazon S3? Choose 3.
- Data is stored as objects within resources called “buckets”
- With S3 Versioning, you can easily preserve, retrieve, and restore every version of an object stored in Amazon S3
- With S3 Cross-Region Replication (CRR), you can replicate objects (and their respective metadata and object tags) into other AWS Regions
- S3 can be attached to an EC2 instance to provide block storage
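Because S3 is object storage rather than block storage (so it cannot be attached to an EC2 instance as a disk), all access goes through its HTTP API. A minimal boto3 sketch of storing and retrieving an object; the bucket name is hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Objects live in buckets and are addressed by a key, not a block-device path.
with open("whale.jpg", "rb") as f:
    s3.put_object(
        Bucket="my-example-bucket",  # hypothetical bucket name
        Key="photos/whale.jpg",      # the object key
        Body=f,
    )

# Retrieval is an HTTP GET against the same key.
obj = s3.get_object(Bucket="my-example-bucket", Key="photos/whale.jpg")
data = obj["Body"].read()
```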
What are storage classes provided by S3? Choose 3.
- Standard, Intelligent-Tiering, Standard-Infrequent Access
- One Zone-Infrequent Access (One Zone-IA), Glacier (S3 Glacier)
- Glacier Deep Archive
- Elastic File System and Elastic Block Storage
- Storage Gateway
Which of the following are use cases for using Amazon Simple Storage Service (Amazon S3)? Choose 4.
- File System - mounted to EC2 instance
- For Backup and Storage
- To Provide application hosting services that deploy, install, and manage web applications
- To build a redundant, scalable, and highly available infrastructure that hosts video, photo, or music uploads and downloads.
- To host software applications that customers can download
A construction engineering company wants to leverage cloud storage to store its large architectural blueprint files, which are saved as PDFs in a network shared folder used by a collaboration application among project team members. The blueprint files for active projects should be quickly accessible, while files pertaining to completed projects are not accessed frequently. What is the best cloud storage solution that will work with their existing application?
- Store the latest project files in S3 Standard and files more than one month old in S3 Standard-IA. Create a lifecycle policy to move files accordingly.
- Install an AWS Storage volume gateway in cached mode.
- Install an AWS Storage volume gateway in stored mode.
- Install AWS Storage File Gateway.
Which of the following statements are correct for the Amazon S3 data consistency model?
- A process writes a new object to Amazon S3 and immediately lists keys within its bucket. Until the change is fully propagated, the object might not appear in the list.
- A process replaces an existing object and immediately attempts to read it. Until the change is fully propagated, Amazon S3 might return the prior data.
- A process deletes an existing object and immediately attempts to read it. Until the deletion is fully propagated, Amazon S3 might return the deleted data.
- A process deletes an existing object and immediately lists keys within its bucket. Until the deletion is fully propagated, Amazon S3 might list the deleted object.
- All of the above
Jason creates an S3 bucket ‘mywestwebsite’ in the ‘us-west-1’ region. Which of these are correct URLs to access this bucket? Choose 3.
- https://amazonaws.s3.us-west-1.com/mywestwebsite
- https://s3.us-west-1.amazonaws.com/mywestwebsite
- https://s3.amazonaws.com/mywestwebsite
- https://mywestwebsite.s3.amazonaws.com
- https://mywestwebsite.s3.us-west-1.amazonaws.com
Jason creates an S3 bucket ‘myeastwebsite’ in the ‘us-east-1’ region. Which of these are correct URLs to access this bucket? Choose 4.
- https://amazonaws.s3.us-east-1.com/myeastwebsite
- https://s3.us-east-1.amazonaws.com/myeastwebsite
- https://s3.amazonaws.com/myeastwebsite
- https://myeastwebsite.s3.amazonaws.com
- https://myeastwebsite.s3.us-east-1.amazonaws.com
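Two URL styles are at play in these options: virtual-hosted-style (bucket in the hostname) and path-style (bucket in the path). A small sketch that builds both forms for any bucket and region; note it only constructs strings, it does not validate them:

```python
def s3_urls(bucket, region):
    """Return the two common REST endpoint URL styles for a bucket."""
    return [
        # Virtual-hosted-style: the bucket name is part of the hostname.
        f"https://{bucket}.s3.{region}.amazonaws.com",
        # Path-style: the bucket name is the first path segment.
        f"https://s3.{region}.amazonaws.com/{bucket}",
    ]

print(s3_urls("mywestwebsite", "us-west-1"))
```

Buckets in us-east-1 can additionally be addressed through the legacy global endpoint (https://s3.amazonaws.com/bucket and https://bucket.s3.amazonaws.com), which is why the us-east-1 question has one more correct option than the us-west-1 question.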
Based on the following Amazon S3 URL of an object, which of the following statements are correct? Choose 2.
https://mywestwebsite.s3.amazonaws.com/photos/whale.jpg
- “whale.jpg” is stored in the folder “photos” inside the bucket “mywestwebsite”.
- The key of the object will be “photos/whale.jpg”
- The key of the object will be “whale.jpg”
- The object “whale.jpg” is stored in the main bucket folder “mywestwebsite”
John Smith has an object saved in S3 with the attribute value “color=violet”. He updates the object with the attribute value “color=red”. He GETs the object after 2 seconds and reads the attribute value of color. What will be the value?
- The value will be “violet”
- The value will be “red”
- The value can be either “violet” or “red”
- He will get 404 object not found error.
Agrim uses S3 to store all his personal photos. He has a bucket named “personalgallery” in the us-east-1 region. After coming back from a vacation in Alaska, he puts all his camera snaps in a desktop folder “alaskaphotos” on his laptop. The photos have file names photo1.jpg, photo2.jpg, etc. He logs into his AWS account, opens the S3 console, and drags the desktop folder “alaskaphotos” into the “personalgallery” bucket to upload the files. Which of the following is correct? Choose 2.
- All the snap files photo1.jpg, photo2.jpg etc. will be visible in the S3 console inside the main bucket folder “personalgallery”
- All the snap files photo1.jpg, photo2.jpg etc. will be visible in the S3 console inside another folder “alaskaphotos” under the main bucket folder “personalgallery”
- The key name of the photos files will be “photo1.jpg” “photo2.jpg” etc.
- The key name of the photos files will be “/alaskaphotos/photo1.jpg” “/alaskaphotos/photo2.jpg” etc.
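Folders in the S3 console are just key prefixes: uploading a folder prepends the folder name to each object key, with no leading slash. A boto3 sketch reproducing what the console drag-and-drop does, assuming the folder contains only files:

```python
import os
import boto3

s3 = boto3.client("s3")

local_folder = "alaskaphotos"
for name in os.listdir(local_folder):
    # The folder name becomes a key prefix, e.g. "alaskaphotos/photo1.jpg".
    # Note there is no leading slash -- keys are not absolute paths.
    s3.upload_file(
        Filename=os.path.join(local_folder, name),
        Bucket="personalgallery",
        Key=f"{local_folder}/{name}",
    )
```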
John hosts his personal blog as a static website on S3. The bucket he uses to store his website files is ‘west-bucket’ in the ‘us-west-2’ region. The photos are uploaded under the main bucket folder using the S3 console. What is the URL of John’s static website?
- http://s3-us-west-2.amazonaws.com/west-bucket
- http://west-bucket.s3-us-west-2.amazonaws.com/
- http://west-bucket.s3-website-us-west-2.amazonaws.com/
- http://s3-website-us-west-2.amazonaws.com/west-bucket
James hosts his personal blog as a static website on S3. The bucket he uses to store his website files is ‘eu-bucket’ in the ‘eu-central-1’ region. The photos are uploaded under the main bucket folder using the S3 console. What will be the URL of James’s static website?
- http://s3-eu-central-1.amazonaws.com/eu-bucket
- http://eu-bucket.s3-website.eu-central-1.amazonaws.com/
- http://eu-bucket.s3-website-eu-central-1.amazonaws.com/
- http://s3-website-eu-central-1.amazonaws.com/eu-bucket
You are an architect who has been tasked to build a static website using S3. What are the essential prerequisite steps? Choose 2.
- Register a custom domain name in Route 53.
- Configure the bucket’s properties for static website hosting with an index document, error document, and redirection rules.
- Enable HTTP on the bucket.
- Ensure that the bucket and its objects have public read access.
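A boto3 sketch of those two prerequisites, assuming a hypothetical bucket named west-bucket: enable the static website property, then grant public read access with a bucket policy:

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "west-bucket"  # hypothetical bucket name

# Prerequisite 1: enable static website hosting with index/error documents.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Prerequisite 2: make the objects publicly readable.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```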
Which S3 storage class is not designed to be resilient to simultaneous complete data loss in a single Availability Zone and partial loss in another Availability Zone?
- STANDARD_IA
- ONEZONE_IA
- INTELLIGENT_TIERING
- DEEP_ARCHIVE
Which S3 storage classes are designed for long-lived and infrequently accessed data? Choose 2.
- STANDARD_IA
- ONEZONE_IA
- GLACIER
- DEEP_ARCHIVE
The GLACIER and DEEP_ARCHIVE storage classes offer the same durability and resiliency as the STANDARD storage class.
- True
- False
What are the benefits of AWS Storage Gateway? Choose 3.
- Hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage.
- You can use the service for backup and archiving, disaster recovery, cloud data processing, storage tiering, and migration.
- An on-premises solution for enhancing your company’s data center storage capacity without connecting to AWS cloud storage.
- Your applications connect to the service through a virtual machine or hardware gateway appliance using standard storage protocols, such as NFS, SMB and iSCSI.
What are the three storage interfaces supported by AWS Storage Gateway?
- File Gateway
- Volume Gateway
- Tape Gateway
- Network Gateway
What is the minimum file size that can be stored in S3?
- 1 Byte
- 0 Byte
- 1 KB
- 1 MB
What is the largest object size that can be uploaded to S3 in a single PUT?
- 5GB
- 5TB
- 5MB
- 5KB
What is the maximum file size that can be stored on S3?
- 5GB
- 5TB
- 5MB
- 5KB
A law firm has an internal tablet/mobile application used by employees to download large Word documents to their devices for offline review. These documents range in size from 10-20 MB. The employees save a document in local device storage, edit it in offline mode, and then use a feature in the app to upload the file to cloud storage. Most of the time users are expected to be in areas of high mobile bandwidth (LTE or Wi-Fi), but at times they may be in areas with a slow network (EDGE or 3G) with lots of fluctuation. The files are stored in AWS S3 buckets. What approach should the architect recommend for file upload in the application?
- Use Single PUT operation to upload the files to S3
- Use Multipart upload to upload the files to S3
- Use Amazon S3 Transfer Acceleration to upload the files
- Use Single POST operation to upload the files to S3
What are the recommended scenarios to use multipart uploading to S3? Choose 2.
- If you’re uploading any size objects over a stable high-bandwidth network, use multipart uploading to maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded performance.
- If you’re uploading large objects over a stable high-bandwidth network, use multipart uploading to maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded performance.
- If you’re uploading over a stable high-bandwidth network, use multipart uploading to increase resiliency to network errors by avoiding upload restarts.
- If you’re uploading over a spotty network, use multipart uploading to increase resiliency to network errors by avoiding upload restarts.
What is S3 transfer acceleration? Choose 2.
- Enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket.
- Enables fast, easy, and secure transfers of files over short distances between your client and your Amazon S3 bucket.
- Leverages Amazon CloudFront’s globally distributed AWS Edge Locations.
- Leverages Amazon CloudFront’s regionally distributed AWS Edge Locations.
You have designed an intranet web application for your employees to upload files to S3 buckets for archival. One of the employees is trying to upload a 6 GB file to S3 but keeps getting the following AWS error message: “Your proposed upload exceeds the maximum allowed object size.” What can be the possible reason?
- Your intranet firewall is not allowing upload of that object size.
- Your browser is not allowing upload of that object size.
- The maximum size of an object that can be uploaded to S3 in a single PUT operation is 5 GB.
- The S3 bucket cannot store object of that size.
In general, at what object size AWS recommends using multipart uploads instead of uploading the object in a single operation?
- 5 MB
- 50 MB
- 100 MB
- 5 GB
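boto3's high-level transfer manager applies this recommendation automatically: it switches to multipart upload once a file crosses a configurable threshold. A sketch with the threshold set to the 100 MB guideline; the file and bucket names are hypothetical:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Use multipart upload for objects over 100 MB, uploading up to
# 4 parts in parallel.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    max_concurrency=4,
)

s3.upload_file(
    Filename="backup.tar.gz",     # hypothetical local file
    Bucket="my-archive-bucket",   # hypothetical bucket
    Key="backups/backup.tar.gz",
    Config=config,
)
```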
What are the reasons to use S3 Transfer acceleration? Choose 2.
- Applications that upload to a centralized bucket from all over the world.
- Transfer gigabytes to terabytes of data on a regular basis across continents.
- To improve application performance
- To improve snapshot copy of EC2 EBS volume.
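Transfer Acceleration is a bucket-level setting; once enabled, clients opt in per connection by using the accelerate endpoint. A boto3 sketch, with a hypothetical bucket name:

```python
import boto3
from botocore.config import Config

# One-time setup: enable Transfer Acceleration on the bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="my-central-bucket",  # hypothetical bucket
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then route transfers through the nearest CloudFront edge
# location via the <bucket>.s3-accelerate.amazonaws.com endpoint.
s3_fast = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_fast.upload_file("data.bin", "my-central-bucket", "uploads/data.bin")
```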
Amazon EBS provides which type of storage?
- Block based Storage
- Object based Storage
- Magnetic Storage
- File Storage
Your company is planning to store their important documents in S3 storage. The compliance unit wants to be notified when documents are created or deleted, along with the user name. You know that S3 has the feature of event notifications for object events like s3:ObjectCreated:* and s3:ObjectRemoved:*. What are the destinations where S3 can publish events? Choose 3.
- Amazon SES
- Amazon Simple Notification Service (Amazon SNS) topic
- Amazon Simple Queue Service (Amazon SQS) queue
- AWS Lambda
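A boto3 sketch wiring object-created events to an SQS queue and object-removed events to an SNS topic. The bucket name and ARNs are placeholders, and the queue/topic resource policies must already allow S3 to publish:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="compliance-docs",  # placeholder bucket
    NotificationConfiguration={
        "QueueConfigurations": [{
            # Placeholder ARN; the queue policy must permit s3.amazonaws.com.
            "QueueArn": "arn:aws:sqs:us-east-1:111122223333:doc-events",
            "Events": ["s3:ObjectCreated:*"],
        }],
        "TopicConfigurations": [{
            # Placeholder ARN; the topic policy must permit s3.amazonaws.com.
            "TopicArn": "arn:aws:sns:us-east-1:111122223333:doc-deletes",
            "Events": ["s3:ObjectRemoved:*"],
        }],
    },
)
```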
You want to host your own cloud blog website with the custom domain “www.mycloudblog.com” as a static website using S3. What are the essential prerequisite steps? Choose 4.
- Register the custom domain name in Route 53. Create the alias records that you add to the hosted zone for your domain name.
- Configure the bucket’s properties for static website hosting with an index document, error document, and redirection rules.
- The bucket names must match the names of the website that you are hosting.
- Enable HTTP on the bucket
- Ensure that the bucket and its objects have public read access
What are the benefits of using versioning in S3? Choose 2.
- To restrict access to bucket.
- To preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket.
- To encrypt objects stored in the bucket.
- To recover from both unintended user actions and application failures.
What is the version ID of stored objects before versioning is enabled on the bucket?
- 111111
- 222222
- 999999
- Null
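A boto3 sketch that enables versioning and then inspects version IDs; objects uploaded before versioning was enabled report a version ID of null (the bucket name is illustrative):

```python
import boto3

s3 = boto3.client("s3")
bucket = "personalgallery"  # illustrative bucket name

# Enable versioning; objects stored before this call keep version ID "null".
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# After versioning is on, every PUT of the same key adds a new version.
for v in s3.list_object_versions(Bucket=bucket).get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])
```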
How do versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite? Choose 2.
- If you delete or overwrite an object AWS keeps a copy in the archive folder.
- If you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version.
- If you overwrite an object, it results in a new object version in the bucket.
- If you delete or overwrite an object AWS emails you a copy of the previous version.
Which of the following statements are true? Choose 3.
- Buckets can be in one of three states: unversioned (the default), versioning-enabled, or versioning-suspended.
- Buckets can be in one of two states: unversioned (the default) or versioning-enabled.
- Once you version-enable a bucket, it can never return to an unversioned state.
- Once you version-enable a bucket, it can return to an unversioned state.
- Once you version-enable a bucket, you can only suspend versioning on that bucket.
Your company stores customer contract documents in S3. One of the account managers deleted the signed contracts for his accounts. As a result, you have been asked to configure S3 storage in such a way that files are protected against inadvertent or intentional deletion. How will you configure S3? Choose 2.
- Enable Versioning on the bucket.
- Write a lambda program which copies the file in another backup bucket.
- Enable MFA delete on the bucket.
- Use lifecycle policy which copies the data after POST/UPDATE into another bucket.
- Use cross region replication which copies the data after POST/UPDATE into another bucket.
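MFA Delete is layered on top of versioning and can only be changed by the bucket owner's root credentials with a valid MFA code. A sketch; the bucket name, device serial, and code are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Versioning must be enabled; MFA Delete then requires an MFA code both
# to permanently delete a version and to change the versioning state.
s3.put_bucket_versioning(
    Bucket="customer-contracts",  # placeholder bucket
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    # Format: "<mfa-device-serial> <code>" -- placeholder values.
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
)
```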
What is S3 Cross-Region Replication?
- Enables automatic, synchronous copying of objects across buckets in same AWS Regions
- Enables automatic, synchronous copying of objects across buckets in different AWS Regions
- Enables automatic, asynchronous copying of objects across buckets in different AWS Regions
- Enables automatic, asynchronous copying of objects across buckets in same AWS Regions
What are the reasons to enable cross-region replication on your S3 buckets?
- Comply with compliance requirements
- Minimize latency
- Increase operational efficiency
- Maintain object copies under different ownership
- All of the above
What are the prerequisites to enable Cross-Region Replication? Choose 4.
- The source and destination bucket owner must have their respective source and destination AWS Regions enabled for their account.
- Both source and destination buckets must be in different regions and have versioning enabled.
- Amazon S3 must have permissions to replicate objects from the source bucket to the destination bucket.
- If the owner of the source bucket doesn’t own the object in the bucket, the object owner must grant the bucket owner READ and READ_ACP permissions with the object access control list (ACL).
- Both source and destination buckets must be in the same region and have versioning enabled
- Amazon S3 needs only a single permission: to read objects in the source bucket.
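A boto3 sketch of a replication configuration covering those prerequisites: both buckets already have versioning enabled and live in different regions, and the IAM role ARN (a placeholder here) grants S3 the replication permissions:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="myuseastbucket",  # source bucket, versioning enabled
    ReplicationConfiguration={
        # Role S3 assumes to replicate objects -- placeholder ARN.
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [{
            "ID": "replicate-everything",
            "Prefix": "",      # empty prefix = replicate all objects
            "Status": "Enabled",
            "Destination": {
                # Destination bucket in another region, versioning enabled.
                "Bucket": "arn:aws:s3:::myeuwestbucket",
            },
        }],
    },
)
```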
What is S3 Object expiration?
- When an object reaches the end of its lifetime, Amazon S3 queues it for removal and removes it synchronously
- When an object reaches the end of its lifetime, Amazon S3 queues it for removal and removes it asynchronously
- When an object reaches the end of its lifetime, Amazon S3 queues it for removal and moves it to DEEP_ARCHIVE
- When an object reaches the end of its lifetime, Amazon S3 queues it for removal and moves it to GLACIER
What isn’t replicated by default when you enable Cross-Region Replication on your S3 bucket? Choose 3.
- Objects with file types .doc, .pdf, .png
- Objects that existed before you added the replication configuration to the bucket.
- Objects in the source bucket that the bucket owner doesn’t have permissions for.
- Objects created with server-side encryption using customer-provided (SSE-C) encryption keys.
- Objects encrypted using Amazon S3 managed keys (SSE-S3)
Suppose that you are a solution architect at a global company with regional headquarters in US East, Ireland, and Sydney. You have configured cross-region replication where bucket ‘myuseastbucket’ in the ‘us-east-1’ US East (N. Virginia) region is the source and bucket ‘myeuwestbucket’ in ‘eu-west-1’ EU (Ireland) is the destination. You then added another cross-region replication configuration where bucket ‘myeuwestbucket’ is the source and bucket ‘mysoutheastbucket’ in Asia Pacific (Sydney) ‘ap-southeast-2’ is the destination. You notice that a file created in ‘myuseastbucket’ is replicated to ‘myeuwestbucket’ but not to ‘mysoutheastbucket’. What is the possible reason?
- You have not configured cross-region replication from ‘myuseastbucket’ to ‘mysoutheastbucket’
- Daisy-chain replication is not supported by S3.
- You have not contacted AWS Support to enable cross-region replication to more than two destinations.
- You have not given S3 permission to replicate objects to ‘mysoutheastbucket’
What are the actions that can be configured in the S3 object lifecycle? Choose 2.
- Define when objects transition to another storage class.
- Define when objects expire.
- Define when object versioning is to be started.
- Define when object cross region replication is to be started.
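A boto3 sketch combining both actions in one rule: transition objects to STANDARD_IA after 30 days, then expire them after a year (the bucket name and prefix are illustrative):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="periodic-logs",  # illustrative bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "age-out-logs",
            "Filter": {"Prefix": "logs/"},  # apply only to this prefix
            "Status": "Enabled",
            # Action 1: transition to another storage class.
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            # Action 2: expire (queued for asynchronous removal).
            "Expiration": {"Days": 365},
        }],
    },
)
```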
Which statements on Amazon S3 pricing are true? Choose 3.
- If you create a lifecycle expiration rule that causes objects that have been in INTELLIGENT_TIERING, STANDARD_IA, or ONEZONE_IA storage for less than 30 days to expire, you are charged for 30 days
- You are only ever charged for the number of days objects are in INTELLIGENT_TIERING, STANDARD_IA, or ONEZONE_IA
- If you create a lifecycle expiration rule that causes objects that have been in GLACIER storage for less than 90 days to expire, you are charged for 90 days.
- If you create a lifecycle expiration rule that causes objects that have been in DEEP_ARCHIVE storage for less than 180 days to expire, you are charged for 180 days.
Which of the following lifecycle transitions between storage classes are supported? Choose 2.
- You can only transition from STANDARD to STANDARD_IA or ONEZONE_IA
- You can transition from the STANDARD storage class to any other storage class.
- You can only transition from STANDARD to the GLACIER or DEEP_ARCHIVE storage classes.
- You can transition from any storage class to the GLACIER or DEEP_ARCHIVE storage classes.
Which lifecycle transitions between storage classes are supported? Choose 3.
- You can transition from the STANDARD_IA storage class to the INTELLIGENT_TIERING or ONEZONE_IA storage classes.
- You can transition from any storage class to the STANDARD storage class.
- You can transition from the INTELLIGENT_TIERING storage class to the ONEZONE_IA storage class.
- You can transition from the DEEP_ARCHIVE storage class to any other storage class.
- You can transition from the GLACIER storage class to the DEEP_ARCHIVE storage class.
Which lifecycle transitions between storage classes are not supported? Choose 4.
- Transition from any storage class to the STANDARD storage class.
- Transition from the STANDARD storage class to any other storage class.
- Transition from the INTELLIGENT_TIERING storage class to the STANDARD_IA storage class.
- Transition from the ONEZONE_IA storage class to the STANDARD_IA or INTELLIGENT_TIERING storage classes.
- Transition from the DEEP_ARCHIVE storage class to any other storage class.
A manufacturing company has been using on-premises servers for storage. They have nearly used up their installed storage capacity but don’t want to spend on adding new on-premises capacity. They want to leverage AWS but don’t want to migrate all of their current on-premises data to the cloud. Which AWS service can they use to achieve their requirement?
- Amazon S3
- Amazon EBS
- AWS Storage Gateway
- Amazon RDS
Which option of AWS Storage Gateway provides cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers?
- File Gateway
- Volume Gateway
- Tape Gateway
- iSCSI Gateway
You are creating a bucket with the name ‘mybucket’ but you get an error message that the ‘bucket name already exists’. You don’t have a bucket with the same name, nor did you create a bucket with a similar name earlier. What is the reason you are getting this error?
- S3 doesn’t allow you to create a bucket with the name ‘mybucket’; it is reserved.
- You cannot have the substring ‘bucket’ in your bucket name.
- Bucket names must be unique across all existing bucket names in Amazon S3.
- ‘mybucket’ is not a DNS-compliant bucket name
Which option of AWS Storage Gateway provides the ability to store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB)?
- File Gateway
- Volume Gateway
- Tape Gateway
- iSCSI Gateway
Which of the following statements are correct about Volume Gateway? Choose 2.
- Cached volumes store your data in Amazon S3 and retain a copy of frequently accessed data subsets locally.
- Stored volumes provide low-latency access to your entire dataset by storing all your data locally.
- Stored volumes store your data in Amazon S3 and retain a copy of frequently accessed data subsets locally.
- Cached volumes provide low-latency access to your entire dataset by storing all your data locally.
What are the advantages provided by multipart upload? Choose 4.
- Ability to upload parts in parallel to improve throughput.
- Ability to begin an upload before knowing the object size.
- Ability to pause and resume the upload.
- Quick recovery from network issues.
- Ability to upload 10 MB to 5 GB, last part can be < 10 MB.
What are the hosting options for AWS Storage Gateway? Choose 3.
- On-premises as a VM appliance
- Hardware appliance
- In AWS Elastic Beanstalk
- In AWS as an Amazon EC2 instance
Which of the following options are correct for File Storage Gateway? Choose 3.
- File gateway presents a file-based interface to Amazon S3, which appears as a network file share. It enables you to store and retrieve Amazon S3 objects through standard file storage protocols.
- With file gateway, your configured S3 buckets will be available as Network File System (NFS) mount points or Server Message Block (SMB) file shares to your existing file-based applications or devices.
- With file gateway, your configured S3 buckets will be available as iSCSI shares to your existing file-based applications or devices.
- The gateway translates these file operations into object requests on your S3 buckets. Your most recently used data is cached on the gateway for low-latency access, and data transfer between your data center and AWS is fully managed and optimized by the gateway.
Which S3 storage class is suitable for performance-sensitive use cases (those that require millisecond access time) and frequently accessed data?
- INTELLIGENT_TIERING
- STANDARD
- STANDARD-IA
- ONEZONE_IA
Your company is adopting AWS and wants to minimize its on-premises storage footprint, but needs to retain on-premises access to storage for its existing apps. You would like to leverage AWS services to replace on-premises storage with cloud-backed storage that allows existing applications to operate without changes, while still getting the benefits of storing and processing this data in AWS. Which AWS service will be appropriate?
- Amazon S3
- Amazon RDS
- Amazon EBS
- AWS Storage Gateway
Which of the following are applicable use cases for AWS Storage Gateway? Choose 3.
- Increase performance and reduce latency of on-premises storage.
- Moving on-premises backups to AWS.
- Replace on-premises storage with cloud-backed storage, allowing their existing applications to operate without changes while still getting the benefits of storing and processing this data in AWS.
- Run apps in AWS and make the results available from multiple on-premises locations such as data centers or branch and remote offices. Also, customers that have moved their on-prem archives to AWS often want to make this data available for access from existing on-premises applications.
Which of the following server-side encryption methods are supported in S3?
- Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
- Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
- Server-Side Encryption with Customer-Provided Keys (SSE-C)
- All of the above
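A boto3 sketch requesting each of the three server-side encryption methods on upload; the bucket name and KMS key ID are placeholders:

```python
import os
import boto3

s3 = boto3.client("s3")
bucket = "secure-docs"  # placeholder bucket

# SSE-S3: Amazon S3 manages the keys (AES-256).
s3.put_object(Bucket=bucket, Key="a.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: encrypt under an AWS KMS key (placeholder key ID).
s3.put_object(Bucket=bucket, Key="b.txt", Body=b"data",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="1234abcd-12ab-34cd-56ef-1234567890ab")

# SSE-C: you supply the key per request; S3 encrypts, then discards it.
# botocore base64-encodes the key and adds the MD5 header automatically.
key = os.urandom(32)
s3.put_object(Bucket=bucket, Key="c.txt", Body=b"data",
              SSECustomerAlgorithm="AES256",
              SSECustomerKey=key)
```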
What is a Tape Gateway? Choose 3.
- Cloud-based virtual tape library.
- Cloud-based file and object library.
- Provides virtual tape library (VTL) interface for existing tape-based backup infrastructure to store data on virtual tape cartridges that you create on your tape gateway.
- After you deploy and activate a tape gateway, you mount the virtual tape drives and media changer on your on-premises application servers as iSCSI devices.
- After you deploy and activate a tape gateway, you mount the virtual tape drives and media changer on your on-premises application servers as File share.
Which of the following two statements are correct about the appropriate use of STANDARD_IA and ONEZONE_IA?
- ONEZONE_IA: Use for your primary or only copy of data that can’t be recreated.
- STANDARD_IA: Use for your primary or only copy of data that can’t be recreated.
- ONEZONE_IA: Use if you can recreate the data if the Availability Zone fails, and for object replicas when setting up cross-region replication (CRR).
- STANDARD_IA: Use if you can recreate the data if the Availability Zone fails, and for object replicas when setting up cross-region replication (CRR).
If you encrypt a bucket on S3, what type of encryption does AWS use?
- 1028-bit Advanced Encryption Standard (AES-1028)
- 256-bit Advanced Encryption Standard (AES-256)
- 128-bit Advanced Encryption Standard (AES-128)
- 192-bit Advanced Encryption Standard (AES-192)
Your company is exploring AWS Storage Gateway to extend its on-premises storage. One of the key criteria is to have AWS as the primary storage while still having fast, low-latency access to frequently accessed data. Which Storage Gateway options meet these criteria? Choose 2.
- Tape Gateway
- File Gateway
- Volume Stored Gateway
- Volume Cached Gateway
How can you protect data in transit to S3?
- Using an AWS KMS–Managed Customer Master Key (CMK)
- Using a Client-Side Master Key
- Using SSL between client and S3
- All of the above
You have an object saved in S3 with the attribute “color=yellow”. Two applications, ‘Client 1’ and ‘Client 2’, update the value of the attribute one after another: Client 1 writes ‘Red’ (W1), then Client 2 writes ‘Ruby’ (W2). Client 1 then does a read operation ‘R1’ after write W2, and Client 2 does a read operation after R1. What can the value of color be for the R1 read? Choose 3.
- For R1 the value of Color = Red
- For R1 the value of Color = Ruby
- For R1 the value of Color = Yellow
- For R1 the value of Color = Null
How can you protect data at rest in S3? Choose 2.
- Using server-side encryption
- Using client-side encryption
- Using SSL between the client and S3
What is the Amazon S3 Block Public Access feature? Choose 3.
- With S3 Block Public Access, account administrators and bucket owners can easily set up centralized controls to limit public access to their Amazon S3 resources that are enforced regardless of how the resources are created.
- You can enable block public access settings only for access points, buckets, and AWS accounts.
- When Amazon S3 evaluates whether an operation is prohibited by a block public access setting, it rejects any request that violates the settings.
- You can enable block public access settings only for objects, buckets, and AWS accounts.
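A boto3 sketch applying all four Block Public Access settings at the bucket level (the bucket name is a placeholder); the same flags can also be set account-wide or on an access point:

```python
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="secure-docs",  # placeholder bucket
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # restrict access to AWS principals
    },
)
```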
You have an S3 bucket named Photos with versioning enabled. You perform the following steps:
- PUT a new object photo.gif which gets version ID = 111111
- PUT a new version of photo.gif.
- DELETE photo.gif
Which of the following two statements are correct?
- After Step 2, Amazon S3 generates a new version ID (121212) and adds the newer version to the bucket, retaining the older version with ID=111111. There are two versions of photo.gif.
- After Step 2, Amazon S3 overwrites the older version with ID=111111 and grants it a new ID. There is only one version of photo.gif.
- After Step 3, when you DELETE an object, all versions remain in the bucket and Amazon S3 inserts a delete marker.
- After Step 3, when you DELETE an object, all versions are deleted from the bucket.
As a solution architect you want to ensure that Amazon Simple Storage Service (Amazon S3) buckets and objects are secure. The resources that need to be private must stay private. What are the ways to limit permissions to Amazon S3 resources? Choose 4.
- Writing AWS Identity and Access Management (IAM) user policies that specify the users that can access specific buckets and objects.
- Writing bucket policies that define access to specific buckets and objects.
- Using client-side encryption
- Using Amazon S3 Block Public Access as a centralized way to limit public access.
- Using server-side encryption
- Setting access control lists (ACLs) on your buckets and objects.
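Among these mechanisms, a bucket policy is a JSON document attached to the bucket itself. A sketch of one common pattern, denying any request not made over HTTPS; the bucket name is a placeholder:

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::secure-docs",    # placeholder bucket
            "arn:aws:s3:::secure-docs/*",
        ],
        # Deny any request that is not made over SSL/TLS.
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket="secure-docs",
                                     Policy=json.dumps(policy))
```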
When should you use an ACL-based Access Policy (Bucket and Object ACLs)? Choose 4.
- When an object ACL is the only way to manage access to objects not owned by the bucket owner.
- When permissions vary by object and you need to manage permissions at the object level.
- When you want to define permission at object level.
- To grant write permission to the Amazon S3 Log Delivery group to write access log objects to your bucket.
- When the AWS account that owns the object also owns the bucket and you need to manage object permissions.
You have an S3 bucket named Photos with versioning enabled. You perform the following steps:
- PUT a new object photo.gif which gets version ID = 111111
- PUT a new version of photo.gif, which gets version ID = 222222
- DELETE photo.gif. A delete marker is added with version ID = 456789
- GET object
Which of the following two statements are correct?
- GET object will return object with version ID = 111111
- GET object will return object with version ID = 222222
- GET Object returns a 404 not found error.
- GET object will return delete marker object with version ID = 456789
By default all S3 buckets are public and can be accessed only by users that are explicitly granted access.
- True
- False
You have an S3 bucket named Photos with versioning enabled. You perform the following steps:
- PUT a new object photo.gif which gets version ID = 111111
- PUT a new version of photo.gif, which gets version ID = 222222
- DELETE photo.gif. A delete marker is added with version ID = 456789
Which of the following two statements are correct?
- You can GET a specific object version.
- You can permanently delete a specific object by specifying the version you want to delete.
- You can permanently delete only latest version of object. version ID=222222
- You can GET only latest version of object. version ID=222222
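A boto3 sketch of both operations, using the version IDs this card assumes (the bucket name is lowercased, since S3 requires lowercase bucket names):

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "photos", "photo.gif"

# GET a specific noncurrent version -- works even behind a delete marker.
old = s3.get_object(Bucket=bucket, Key=key, VersionId="111111")
print(old["Body"].read())

# Permanently delete one specific version by naming its version ID.
s3.delete_object(Bucket=bucket, Key=key, VersionId="222222")

# Deleting the delete marker itself effectively "undeletes" the object.
s3.delete_object(Bucket=bucket, Key=key, VersionId="456789")
```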
In which of the following use cases would you not use S3 lifecycle configurations?
- If you upload periodic logs to a bucket, your application might need them for a week or a month. After that, you might want to delete them.
- Some documents are frequently accessed for a limited period of time. After that, they are infrequently accessed. At some point, you might not need real-time access to them, but your organization or regulations might require you to archive them for a specific period. After that, you can delete them.
- Daily upload of data from regional offices to a central bucket for ETL processing.
- You might upload some types of data to Amazon S3 primarily for archival purposes, long-term database backups, and data that must be retained for regulatory compliance.
When should you choose IAM policies for S3 permissions? Choose 3.
- You prefer to keep access control policies in the IAM environment not only for S3 but for other AWS resources
- If you’re more interested in “What can this user do in AWS?”
- If you’re more interested in “Who can access this S3 bucket?”
- You have numerous S3 buckets each with different permissions requirements.
In order to determine whether the requester has permission to perform the specific operation, put the following steps, which Amazon S3 performs when it receives a request, in order.
1. Converts all the relevant access policies (user policy, bucket policy, ACLs) at run time into a set of policies for evaluation.
2. Object context: If the request is for an object, Amazon S3 evaluates the subset of policies owned by the object owner.
3. User context: Amazon S3 evaluates a subset of policies owned by the parent account.
4. Bucket context: In the bucket context, Amazon S3 evaluates policies owned by the AWS account that owns the bucket.
- 1,2,3,4
- 2,3,4,1
- 3,4,1,2
- 1,3,4,2
How does S3 evaluate a request for a bucket operation requested by an IAM principal whose parent AWS account is also the bucket owner?
- Principal: Jill
- Jill’s Parent Account: 1111-1111-1111
In the user context, Amazon S3 evaluates all policies that belong to the parent AWS account to determine if the principal has permission to perform the operation.
- Amazon S3 evaluates the user context by reviewing the policies authored by the account to verify that the principal has the necessary permissions, then it evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation.
- Amazon S3 evaluates the user context by reviewing the policies authored by the account to verify that the principal has the necessary permissions, then it evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation, and last the object context, where it evaluates the object ACL to determine if Jill has permission to access the objects in the bucket.
- Amazon S3 evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation, and in the object context evaluates the object ACL to determine if Jill has permission to access the objects in the bucket.
When should you choose bucket policies for S3 permissions? Choose 4. (A cross-account sketch follows the options.)
- If you’re more interested in “What can this user do in AWS?”
- If you’re more interested in “Who can access this S3 bucket?”
- You want a simple way to grant cross-account access to your S3 environment, without using IAM roles.
- Your IAM policies are reaching the size limit (up to 2 KB for users, 5 KB for groups, and 10 KB for roles); S3 supports bucket policies of up to 20 KB.
- You prefer to keep access control policies in the S3 environment.
How does Amazon S3 evaluate a request for a bucket operation when the requester is an IAM principal whose parent AWS account is not the bucket owner?
- Principal: Jill
- Jill’s Parent Account: 1111-1111-1111
- Bucket Owner: 2222-2222-2222
- In the user context, Amazon S3 evaluates all policies that belong to the parent AWS account to determine whether the Principal has permission to perform the operation.
- Amazon S3 evaluates the user context by reviewing the policies authored by the account to verify that the Principal has the necessary permissions, then evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation.
- Amazon S3 evaluates the user context by reviewing the policies authored by the account to verify that the Principal has the necessary permissions, then evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation, and finally, in the object context, evaluates the object ACL to determine whether Jill has permission to access the objects in the bucket.
- Amazon S3 evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation, and in the object context evaluates the object ACL to determine whether Jill has permission to access the objects in the bucket.
Which of the following statements are correct about S3 ACLs? Choose 4. (An ACL sketch follows the options.)
- Resource-based access policy options that you can use to manage access to your buckets and objects.
- You can grant permissions only to other AWS accounts; you cannot grant permissions to users in your account.
- A grantee can be an AWS account or IAM user.
- You cannot grant conditional permissions, nor can you explicitly deny permissions.
- A grantee can be an AWS account or one of the predefined Amazon S3 groups.
A building architecture company stores all of its project architecture documents in S3. As an added security measure, they want to allow access to S3 only from their corporate network IP addresses. How can this be achieved? (A policy sketch follows the options.)
- Create a bucket policy with Effect=Allow and a Condition block using the IpAddress operator with the corporate network IP address range.
- Create an IAM policy with Effect=Deny and a Condition block using the NotIpAddress operator with the corporate network IP address range.
- Create an IAM policy with Effect=Allow and a Condition block using the IpAddress operator with the corporate network IP address range.
- All of the above
A pharmaceutical company has an on-premises analytics application with 100 GB of data. They don’t want to invest in extending the on-premises storage, but want to leverage AWS cloud storage without making considerable changes to the analytics application. They also want low-latency access to the last month of data, which is used most frequently and should be stored on premises. Which storage option will you use?
- Amazon RDS
- AWS Volume Gateway (cached volumes)
- Amazon EBS
- Amazon S3
- AWS Volume Gateway (stored volumes)
You are using an S3 bucket to back up on-premises data. You have created a lifecycle policy to transition data from the Standard storage class to Standard-IA 3 days after the data is created in the S3 bucket. If you uploaded a file to the backup S3 folder on 1/15/2020 at 10:30 AM UTC, when will S3 transition it to the Standard-IA storage class? (A configuration sketch follows the options.)
- 1/18/2020 10:30 AM UTC
- 1/18/2020 10:30 PM UTC
- 1/19/2020 00:00 UTC
- 1/18/2020 00:00 UTC
You are a solution architect with your own website on wildlife videography. You have uploaded videos from your recent visit to Brazil’s Amazon forest to the website. On the backend, you store these videos in an S3 folder which is not publicly accessible. You want to ensure that these videos can be downloaded only by registered users of your website. How can you do this? (A sketch follows the options.)
- Make the S3 folder publicly accessible
- Attach a bucket policy to the folder so that it is accessible by the registered users
- Generate a pre-signed URL to grant time-limited permission to download the video file
- Create IAM users for the users registered in the website and give access to S3 bucket
Your company has decided to start its journey to the cloud by moving secondary workloads, such as backups and archives. They want to migrate backup data currently stored on physical on-premises tapes, without changing their current backup workflows or backup applications. As a cloud migration consultant, what strategy will you adopt?
- Use AWS Tape Gateway.
- Use third-party software to convert the tape data to block storage for storing in on-premises EFS.
- Use third-party software to convert the tape data to object storage for uploading to S3.
- Use AWS File Gateway.
- Use AWS Volume Gateway (cached volumes).
You are the solution architect for a pharmaceutical company which has been using a client application to manage its on-premises data backup and archival. The application uses the iSCSI protocol to transfer data between the application and on-premises storage. The on-premises storage currently holds TBs of data and is nearing capacity. The company doesn’t want to invest in expanding on-premises storage capacity. Which AWS service should the company leverage so that there is minimal or no change to the existing backup and archiving application, while low latency is provided for frequently used data?
- Use AWS Tape Gateway.
- Use AWS Volume Gateway (stored volumes).
- Use AWS File Gateway.
- Use AWS Volume Gateway (cached volumes).