Storage Flashcards

1
Q

Can elastic block store (EBS) be used on multiple EC2 instances?

A

Yes, but there are some restrictions. Only io1 and io2 volumes (high-performance Provisioned IOPS drives) support Multi-Attach to multiple EC2 instances, and EBS volumes cannot be shared across Availability Zones.

2
Q

Can EBS volume data persist independently of the life of the instance?

A

Yes

3
Q

Does an EBS volume need to be attached to an instance?

A

No

4
Q

Can we attach multiple EBS volumes to an instance?

A

Yes

5
Q

Can we attach an EBS volume to multiple instances?

A

Yes, Multi-Attach can be used to attach a volume to multiple instances, but there are some constraints: the instances must be in the same AZ, and the EBS volume must be of type io1 or io2.

6
Q

Does the EBS root volume get deleted when the instance is terminated?

A

Yes, it is deleted by default, but this behavior can be changed via the DeleteOnTermination attribute.

7
Q

How can we move an EBS volume from one AZ to another?

A

Take a snapshot of the EBS volume and restore it as a new volume in the other AZ.
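As an illustration, the snapshot-and-restore flow might be sketched like this with boto3 (a hypothetical sketch: the function name, volume ID, and target AZ are placeholders, and actually running it requires boto3 and AWS credentials):

```python
# Sketch of moving an EBS volume to another AZ via a snapshot.
def move_volume_across_az(volume_id, target_az):
    import boto3  # imported lazily so the sketch loads without boto3 installed
    ec2 = boto3.client("ec2")
    # 1. Snapshot the source volume and wait for it to complete
    snap = ec2.create_snapshot(VolumeId=volume_id, Description="AZ move")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
    # 2. Restore the snapshot as a new volume in the target AZ
    vol = ec2.create_volume(SnapshotId=snap["SnapshotId"],
                            AvailabilityZone=target_az)
    return vol["VolumeId"]
```

The new volume can then be attached to an instance in the target AZ.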

8
Q

Are the EBS volume snapshots incremental in nature?

A

Yes, but each snapshot can restore the entire volume even if earlier snapshots are no longer available. An exam question may ask how to free up the space consumed by EBS snapshots (which are stored in S3); the answer is to delete all snapshots except the most recent one.

9
Q

What is instance store volume?

A

Instance store volumes are high-performance local disks that are physically attached to the host computer on which an EC2 instance runs. Instance stores are ideal for temporary storage of information that changes frequently, such as buffers, caches, or scratch data.

10
Q

Can data persist on instance store volumes?

A

Instance stores are ephemeral, which means the data is lost when the instance is stopped or terminated (non-persistent).

11
Q

Can an instance store be detached or attached?

A

No. An instance store is physically attached to the host computer on which the EC2 instance runs.

12
Q

What is an AMI?

A

An AMI (Amazon Machine Image) provides the information required to launch an instance. It may include the following:
1. One or more EBS snapshots or, for instance store-backed AMIs, a template for the root volume of the instance (for example, an operating system, an application server, and applications)
2. Launch permissions that control which AWS accounts can use the AMI to launch instances
3. A block device mapping that specifies the volumes to attach to the instance when it is launched

13
Q

Can EFS be shared across multiple EC2 instances?

A

Yes, an EFS file system can be shared across multiple instances in different Availability Zones. The instances can be in another VPC connected by peering, including a VPC in a different region. EFS file systems can also be accessed from computers in a corporate data center via VPN or Direct Connect. Note that EFS is supported for Linux instances only.

14
Q

How to encrypt an unencrypted EFS volume?

A

Create a new encrypted EFS file system and copy the data to it. EFS encryption can only be enabled at the time of creation.

15
Q

Is there a lifecycle management policy for EFS storage?

A

Yes, Amazon EFS lifecycle management automatically manages cost-effective file storage for your file systems. When enabled, lifecycle management migrates files that have not been accessed for a set period of time to the EFS Standard–Infrequent Access (Standard-IA) or One Zone–Infrequent Access (One Zone-IA) storage class, depending on your file system. You define that period of time by using the Transition into IA lifecycle policy.

16
Q

How many types of FSx file systems are there?

A

Four: NetApp ONTAP, OpenZFS, Windows File Server, and Lustre (for Linux).

17
Q

What are the features of FSx for Windows File Server?

A
  • Provides a fully managed native Microsoft Windows file system
  • Full support for the SMB protocol, Windows NTFS, and Microsoft Active Directory (AD) integration
  • Supports Windows-native file system features: Access Control Lists (ACLs), shadow copies, and user quotas
  • NTFS file systems can be accessed from up to thousands of compute instances using the SMB protocol
  • High availability: replicates data within an Availability Zone (AZ)
  • Multi-AZ: file systems include active and standby file servers in separate AZs
18
Q

What are the features of Amazon FSx for Lustre?

A
  • High-performance file system optimized for fast processing of workloads such as machine learning, high-performance computing (HPC), video processing, financial modeling, and electronic design automation (EDA)
  • Works natively with S3, letting you transparently access your S3 objects as files
  • Your S3 objects are presented as files in your file system, and you can write your results back to S3
  • Provides a POSIX-compliant file system interface
19
Q

Which FSx file system has native S3 integration?

A

FSx for Lustre. It works natively with S3, letting you transparently access your S3 objects as files; your S3 objects are presented as files in your file system, and you can write your results back to S3.

20
Q

What is AWS Storage Gateway?

A

It is a hybrid service for connecting on-premises storage to AWS.

21
Q

What is Amazon S3?

A
  • Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
  • Files stored in S3 are referred to as objects.
  • It can also be seen as a database that stores objects as key-value pairs: the object key is the key, and the object data is the value.
  • Buckets are defined at the region level.
22
Q

How is Amazon S3 security handled?

A
  • IAM policies: which API calls should be allowed for a specific IAM user
  • Resource-based: bucket policies, object Access Control Lists, and bucket Access Control Lists

Note: an IAM principal can access an S3 object if
* The user IAM permissions ALLOW it OR the resource policy ALLOWS it
* AND there’s no explicit DENY

23
Q

What is an S3 JSON-based policy?

A

JSON based policies
* Resources: buckets and objects
* Effect: Allow / Deny
* Actions: Set of API to Allow or Deny
* Principal: The account or user to apply the policy to

24
Q

S3 bucket policy for cross-account access?

A

An S3 bucket policy for cross-account access allows one AWS account to access S3 buckets in another AWS account. Here’s an example of a policy that allows access to a specific bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccessToBucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:root"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::example-bucket/*"
            ]
        }
    ]
}

In this example, replace “123456789012” with the account ID that you want to grant access to. The policy allows that account to read and write objects in the “example-bucket”. If you want to allow access to all buckets in the account, replace the “Resource” field with “arn:aws:s3:::*”.

25
Q

what are the key points on hosting website on Amazon S3?

A
  • The website must be a static website, and it should be accessible on the Internet.
  • The bucket must allow public reads for external users to access its content.
26
Q

Can you version Amazon S3 files?

A

Yes. Versioning is enabled at the bucket level; the same key is used to access the latest version of the object. Suspending versioning does not delete the previous versions.
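A minimal boto3 sketch of enabling versioning at the bucket level (the helper that builds the configuration dict is our own, and the bucket name is a placeholder; the API call requires boto3 and credentials):

```python
def versioning_request(status="Enabled"):
    # Versioning is set per bucket. "Suspended" stops creating new
    # versions but does not delete the existing ones.
    assert status in ("Enabled", "Suspended")
    return {"Status": status}

def enable_versioning(bucket):
    import boto3  # lazy import: the sketch loads without boto3 installed
    boto3.client("s3").put_bucket_versioning(
        Bucket=bucket, VersioningConfiguration=versioning_request())
```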

27
Q

How to enable S3 replication?

A

You must enable versioning in the source and destination buckets to enable replication. There are two types of replication:
1. Cross-Region Replication (CRR)
2. Same-Region Replication (SRR)

  • The buckets can be in different accounts.
  • Copy is asynchronous.
  • Must give proper IAM permissions to S3
  • After you enable the replication, only new objects will be replicated. If you need to replicate, the existing objects use S3 batch replication.
  • For delete operations, you can replicate delete markers from source to target (optional setting). Deletions with a version ID are not replicated.
Use cases: compliance, lower-latency access, and replication across accounts.
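As a sketch, a minimal replication configuration could be assembled like this (the helper and the ARNs are hypothetical; the dict shape follows the `put_bucket_replication` API, and versioning must already be enabled on both buckets):

```python
def replication_config(role_arn, dest_bucket_arn,
                       replicate_delete_markers=False):
    # One rule that replicates every new object to the destination bucket.
    rule = {
        "ID": "replicate-all",
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {},  # empty filter = apply to all objects
        "Destination": {"Bucket": dest_bucket_arn},
        # Delete-marker replication is the optional setting described above
        "DeleteMarkerReplication": {
            "Status": "Enabled" if replicate_delete_markers else "Disabled"},
    }
    return {"Role": role_arn, "Rules": [rule]}
```

The resulting dict would be passed as `ReplicationConfiguration` to `s3.put_bucket_replication`.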
28
Q

What happens when you enable replication in existing S3 bucket?

A

Only new objects will be replicated. Existing objects can be replicated using S3 batch replication.

29
Q

How can you replicate existing objects in S3?

A

S3 batch replication can be used to replicate existing objects.

30
Q

What options do you have for replicating delete operations in S3?

A

You can replicate a marker from source to target (optional setting). Deletion with version ID is not replicated.

31
Q

What are some common use cases for S3 replication?

A

Compliance, lower latency access, and replication across accounts are some common use cases for S3 replication.

32
Q

How many S3 storage classes are there?

A
  1. S3 Standard - General Purpose
  2. S3 Standard-Infrequent Access (IA)
  3. S3 One Zone-Infrequent Access
  4. S3 Glacier Instant Retrieval
  5. S3 Glacier Flexible Retrieval
  6. S3 Glacier Deep Archive
  7. S3 Intelligent-Tiering
Objects can move between classes manually or using S3 Lifecycle configurations.
33
Q

What is the S3 Standard storage class?

A

It is used for frequently accessed data. It has low latency and high throughput. It can sustain two concurrent facility failures.

34
Q

What is S3 Infrequent Access?

A

Use it for data that is less frequently accessed but requires rapid access when needed. It is less expensive than S3 Standard. There are two infrequent-access options:
1. S3 Standard-Infrequent Access: used for disaster recovery and backups.
2. S3 One Zone-Infrequent Access: limited to a single AZ, and you lose the data if the AZ is destroyed. Used for storing secondary backup copies of on-premises data or data you can recreate.

35
Q

What are the Amazon S3 Glacier storage classes?

A

It is low-cost object storage meant for archiving and backups. You pay for storage and object retrieval. There are three Glacier subclasses:
1. Amazon S3 Glacier Instant Retrieval: offers millisecond retrieval and is great for data accessed about once a quarter. The minimum storage duration is 90 days.
2. Amazon S3 Glacier Flexible Retrieval: expedited retrieval takes 1 to 5 minutes, standard retrieval 3 to 5 hours, and bulk retrieval (which is free) 5 to 12 hours. The minimum storage duration is 90 days.
3. Amazon S3 Glacier Deep Archive: for long-term storage; standard retrieval takes 12 hours and bulk 48 hours. The minimum storage duration is 180 days.

36
Q

What is S3 intelligent tiering?

A

Amazon S3 Intelligent-Tiering is the only cloud storage class that delivers automatic storage cost savings when data access patterns change without performance impact or operational overhead.

37
Q

In which order can objects move from one tier to another?

A

Objects can transition from hotter to colder storage classes (for example, S3 Standard → Standard-IA → Glacier → Deep Archive), but not back up. The movement of objects can be automated using lifecycle rules.

38
Q

what is Amazon S3 lifecycle rules?

A

An S3 Lifecycle configuration is a set of rules that define actions Amazon S3 applies to a group of objects. There are two types of actions:
1. Transition actions: define when objects transition to another storage class. For example, you might transition objects to the S3 Standard-IA storage class 30 days after creating them, or archive objects to S3 Glacier Flexible Retrieval one year after creating them. There are costs associated with lifecycle transition requests.
2. Expiration actions: define when objects expire; Amazon S3 deletes expired objects on your behalf. Lifecycle expiration costs depend on when you choose to expire objects.
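A transition-plus-expiration configuration of the kind described above could be sketched as the dict passed to `put_bucket_lifecycle_configuration` (the rule IDs, day counts, and `logs/` prefix are illustrative):

```python
def lifecycle_config():
    # One transition rule and one expiration rule, mirroring the card above.
    return {"Rules": [
        {"ID": "to-ia-then-glacier", "Status": "Enabled", "Filter": {},
         "Transitions": [
             {"Days": 30, "StorageClass": "STANDARD_IA"},   # after 30 days
             {"Days": 365, "StorageClass": "GLACIER"}]},    # after 1 year
        {"ID": "expire-logs", "Status": "Enabled",
         "Filter": {"Prefix": "logs/"},
         "Expiration": {"Days": 730}},                      # delete after 2 years
    ]}
```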

39
Q

Is there a zero-day lifecycle policy?

A

Yes. If you set a transition of 0 days, objects are immediately moved to S3 Glacier. This is useful when data is rarely accessed but must be kept for a limited time. Although uploading to S3 first and then moving to Glacier might seem more expensive than a direct Glacier upload, AWS has ensured this scenario costs no more than uploading to Glacier directly.

40
Q

What is requester pays?

A

In general, bucket owners pay for all Amazon S3 storage and data transfer costs associated with their bucket. However, you can configure a bucket as a Requester Pays bucket, in which case the requester, instead of the bucket owner, pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing the data. The requester cannot be anonymous and must be authenticated with AWS.

41
Q

What is S3 event Notification?

A

You can use the Amazon S3 Event Notifications feature to receive notifications when certain events happen in your S3 bucket. To enable notifications, add a notification configuration that identifies the events you want Amazon S3 to publish, such as objects being created, removed, or replicated. Events can be filtered based on object names. The events are sent to SNS, SQS, Lambda functions, or EventBridge; EventBridge can route the event to over 18 AWS services.
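For example, a notification configuration that sends "object created" events for `.jpg` keys under `images/` to an SQS queue might be sketched as (the queue ARN, prefix, and suffix are placeholders; the dict shape follows `put_bucket_notification_configuration`):

```python
def notification_config(queue_arn):
    # Publish ObjectCreated events, filtered by key prefix and suffix,
    # to the given SQS queue.
    return {"QueueConfigurations": [{
        "QueueArn": queue_arn,
        "Events": ["s3:ObjectCreated:*"],
        "Filter": {"Key": {"FilterRules": [
            {"Name": "prefix", "Value": "images/"},
            {"Name": "suffix", "Value": ".jpg"}]}},
    }]}
```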

42
Q

What is S3 multipart upload?

A

Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object’s data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. It is recommended for files larger than 100 MB and must be used for files larger than 5 GB.
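The part-splitting logic can be sketched independently of the upload API; this helper (our own, not part of boto3) plans (offset, length) pairs within S3's part limits:

```python
def plan_parts(object_size, part_size=100 * 1024**2):
    # Split an object into contiguous (offset, length) parts.
    # S3 limits: at most 10,000 parts, each 5 MiB - 5 GiB
    # (except the final part, which may be smaller).
    assert part_size >= 5 * 1024**2
    parts = []
    offset = 0
    while offset < object_size:
        length = min(part_size, object_size - offset)
        parts.append((offset, length))
        offset += length
    assert len(parts) <= 10_000
    return parts
```

A 250 MiB object with the default 100 MiB part size yields three parts, the last one 50 MiB.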

43
Q

What is S3 Transfer Acceleration?

A

Amazon S3 Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects. Customers who have either web or mobile applications with widespread users or applications hosted far away from their S3 bucket can experience long and variable upload and download speeds over the Internet. S3 Transfer Acceleration (S3TA) reduces the variability in Internet routing, congestion and speeds that can affect transfers, and logically shortens the distance to S3 for remote applications. S3TA improves transfer performance by routing traffic through Amazon CloudFront’s globally distributed Edge Locations and over AWS backbone networks, and by using network protocol optimizations.

44
Q

What are S3 byte-range fetches?

A

Using the Range HTTP header in a GET Object request, you can fetch a byte range from an object, transferring only the specified portion. You can use concurrent connections to Amazon S3 to fetch different byte ranges from within the same object, which helps you achieve higher aggregate throughput than a single whole-object request. Fetching smaller ranges of a large object also lets your application improve retry times when requests are interrupted. Typical sizes for byte-range requests are 8 MB or 16 MB. Byte ranges can also be used to read just the header portion of a file to gather high-level information about it.
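A sketch of a byte-range GET (the header-building helper is our own; `fetch_range` assumes boto3 and AWS credentials, and the bucket/key are placeholders):

```python
def range_header(start, end):
    # HTTP Range header for a byte-range GET; `end` is inclusive.
    return {"Range": f"bytes={start}-{end}"}

def fetch_range(bucket, key, start, end):
    import boto3  # lazy import: the sketch loads without boto3 installed
    resp = boto3.client("s3").get_object(
        Bucket=bucket, Key=key, **range_header(start, end))
    return resp["Body"].read()  # only the requested bytes are transferred
```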

45
Q

What is S3 select?

A

S3 Select is a feature of S3 that lets you specify targeted portions of an S3 object to retrieve and return to you rather than returning the entire contents of the object. You can use basic SQL expressions to select certain columns and filter for particular records in your structured file. It can make object information retrieval up to 400% faster and up to 80% cheaper.
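As an illustration, the parameters for an `s3.select_object_content` call on a gzipped CSV with a header row might be sketched like this (bucket, key, and SQL expression are placeholders):

```python
def select_request(bucket, key, sql):
    # Parameters for select_object_content: run a SQL expression over a
    # gzipped CSV (header row used for column names), returning JSON rows.
    return {
        "Bucket": bucket, "Key": key,
        "ExpressionType": "SQL", "Expression": sql,
        "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"},
                               "CompressionType": "GZIP"},
        "OutputSerialization": {"JSON": {}},
    }
```

A typical expression would be something like `SELECT s.name FROM S3Object s WHERE s.country = 'US'`.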

46
Q

What is S3 batch operations?

A

S3 Batch Operations is a managed solution for performing storage actions like copying and tagging objects at scale, whether for one-time tasks or recurring batch workloads. It can perform actions across billions of objects and petabytes of data with a single request. Use cases:
1. Copy objects between S3 buckets as a batch operation.
2. Encrypt all the unencrypted objects in your S3 buckets (a common exam scenario).
3. Modify ACLs or tags.
4. Restore many objects at a time from S3 Glacier.
5. Invoke a Lambda function to perform a custom action on every object.
The idea is that you can run whatever operation you want on a list of objects.

47
Q

What is S3 SSE-S3 Encryption?

A

When you use server-side encryption with Amazon S3 managed keys (SSE-S3), each object is encrypted with a unique key. As an additional safeguard, it encrypts the key itself with a root key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256) GCM, to encrypt your data.

48
Q

What is Amazon S3 SSE-KMS encryption?

A

The encryption is done using a key managed by AWS KMS. With KMS, you gain user control over keys and can audit key usage with CloudTrail. The object is encrypted on the server side. If you use SSE-KMS, you may be impacted by KMS limits: uploading an object calls the GenerateDataKey KMS API, and downloading calls the Decrypt KMS API. Every call counts toward the KMS requests-per-second quota (5,500, 10,000, or 30,000 req/s depending on the region).

49
Q

What is Amazon S3 client-side encryption?

A

Client-side encryption is the act of encrypting your data locally to ensure its security as it passes to the Amazon S3 service. The Amazon S3 service receives your encrypted data; it does not play a role in encrypting or decrypting it.

50
Q

How to enforce S3 Encryption?

A

One way to enforce encryption is to use a bucket policy that refuses any PUT API call that does not include the required encryption header. Another way is to use the default encryption option on the bucket.
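A bucket policy of the first kind might be sketched like this (the bucket name is a placeholder; the condition shown enforces SSE-KMS specifically, via the `s3:x-amz-server-side-encryption` condition key):

```python
def deny_unencrypted_puts(bucket):
    # Bucket policy that rejects any PutObject request whose
    # server-side-encryption header is not "aws:kms".
    return {"Version": "2012-10-17", "Statement": [{
        "Sid": "DenyUnencryptedPut",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {"StringNotEquals": {
            "s3:x-amz-server-side-encryption": "aws:kms"}},
    }]}
```

The dict would be JSON-serialized and passed to `s3.put_bucket_policy`.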

51
Q

What is cross origin resource sharing (CORS)?

A

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
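As an illustration, a CORS configuration allowing GETs from a single origin could be built as the dict passed to `put_bucket_cors` (the origin and cache duration are placeholders):

```python
def cors_config(origin):
    # Allow cross-origin GETs from one site; browsers may cache the
    # preflight response for up to an hour.
    return {"CORSRules": [{
        "AllowedOrigins": [origin],
        "AllowedMethods": ["GET"],
        "AllowedHeaders": ["*"],
        "MaxAgeSeconds": 3600,
    }]}
```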

52
Q

What is S3 MFA Delete?

A

MFA Delete helps prevent accidental deletions by requiring the user who initiates the delete action to prove physical possession of an MFA device with an MFA code, adding an extra layer of friction and security. MFA is required to:
1. Permanently delete an object version.
2. Suspend versioning on the bucket.
To use MFA Delete, versioning must be enabled on the bucket, and only the bucket owner can enable or disable it.

53
Q

What are S3 access logs?

A

Server access logging provides detailed records of the requests made to an Amazon S3 bucket. Access logs are useful for many applications; for example, access log information can be useful in security and access audits. Never save the access logs to the same bucket, as that creates a logging loop.

54
Q

What is an Amazon S3 pre-signed URL?

A

Pre-signed URLs provide short-term access to a private object in your S3 bucket. They work by appending an AWS access key, an expiration time, and a SigV4 signature as query parameters to the S3 object URL. There are two common use cases:
1. Simple, occasional sharing of private files.
2. Frequent, programmatic access to view or upload a file in an application.
Examples:
1. Only logged-in users can download a premium video from your S3 bucket.
2. Allow an ever-changing list of users to download files by generating URLs dynamically.
3. Allow a user to temporarily upload a file to a precise location in your S3 bucket.
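A minimal boto3 sketch of generating a time-limited download URL (bucket and key are placeholders; actually running it requires boto3 and AWS credentials):

```python
def make_download_url(bucket, key, ttl=900):
    import boto3  # lazy import: the sketch loads without boto3 installed
    # The URL is valid for `ttl` seconds; whoever holds it can GET the
    # object without having AWS credentials of their own.
    return boto3.client("s3").generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=ttl)
```

Upload URLs work the same way with `"put_object"` as the client method.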

55
Q

What is S3 Glacier Vault Lock?

A

S3 Glacier Vault Lock helps you to easily deploy and enforce compliance controls for individual S3 Glacier vaults with a Vault Lock policy. You can specify controls such as “write once read many” (WORM) in a Vault Lock policy and lock the policy from future edits.

57
Q

What is S3 Object Lock?

A

With S3 Object Lock, you can store objects using a write-once-read-many (WORM) model. Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely. There are two retention modes:
1. Compliance:
  • Object versions can’t be overwritten or deleted by any user, including the root user
  • Retention modes can’t be changed, and retention periods can’t be shortened
2. Governance:
  • Most users can’t overwrite or delete an object version or alter its lock settings
  • Some users have special permissions to change the retention or delete the object
Retention settings:
1. Retention Period: protects the object for a fixed period; it can be extended
2. Legal Hold: protects the object indefinitely, independent of the retention period; it can be freely placed and removed using the s3:PutObjectLegalHold IAM permission
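A sketch of the retention settings that would be passed to `put_object_retention` (the helper is our own, and the 30-day period is illustrative):

```python
def compliance_retention(days):
    from datetime import datetime, timedelta, timezone
    # COMPLIANCE mode: the retention can't be shortened or removed,
    # even by the root user, until RetainUntilDate passes.
    until = datetime.now(timezone.utc) + timedelta(days=days)
    return {"Mode": "COMPLIANCE", "RetainUntilDate": until}
```

Usage would look like `s3.put_object_retention(Bucket=..., Key=..., Retention=compliance_retention(30))`.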

58
Q

What are S3 Access Points?

A

Customers increasingly use Amazon S3 to store shared data sets, where data is aggregated and accessed by different applications, teams, and individuals for analytics, machine learning, real-time monitoring, or other data lake use cases. Managing access to a shared bucket requires a single bucket policy that controls access for dozens to hundreds of applications with different permission levels. As an application set grows, the bucket policy becomes more complex and time-consuming to manage, and it must be audited to make sure changes don’t have an unexpected impact on another application. Amazon S3 Access Points, a feature of S3, simplify data access for any AWS service or customer application that stores data in S3: customers can create a unique access control policy for each access point to easily control access to shared datasets.

59
Q

What is AWS Snowcone?

A

AWS Snowcone
* AWS Snowcone is a portable, rugged, and secure edge computing and data transfer device.
* Snowcone can be used to collect, process, and move data to AWS, either offline by shipping the device, or online with AWS DataSync.
* AWS Snowcone stores data securely in edge locations, and can run edge computing workloads that use AWS IoT Greengrass or EC2 instances.
* Snowcone devices are small and weigh 4.5 lbs. (2.1 kg), so you can carry one in a backpack or fit it in tight spaces for IoT, vehicular, or even drone use cases.
* Snowcone: 8 TB of HDD or 14 TB of SSD storage
* Use Snowcone where Snowball does not fit (space constrained environment)
* You must provide battery / cables
* Can be sent back to AWS offline, or connect it to internet and use AWS DataSync to send data

60
Q

What is Snowball Edge?

A

AWS Snowball Edge
* AWS Snowball Edge is a data migration and edge computing device that comes in two device options:
* Compute Optimized: Snowball Edge Compute Optimized devices provide 52 vCPUs, 42 TB or 28 TB of usable block or object storage, and an optional GPU for use cases such as advanced machine learning and full-motion video analysis in disconnected environments.
* Storage Optimized: Snowball Edge Storage Optimized devices provide 40 vCPUs of compute capacity coupled with 80 terabytes of usable block or S3-compatible object storage.
* It is well-suited for local storage and large-scale data transfer.
* Customers can use these two options for data collection, machine learning and processing, and storage in environments with intermittent connectivity (such as manufacturing, industrial, and transportation) or in extremely remote locations (such as military or maritime operations) before shipping it back to AWS.
* Snowball devices may also be rack-mounted and clustered together to build larger, temporary installations.

61
Q

What is SnowMobile?

A
  • AWS Snowmobile moves up to 100 PB of data in a 45-foot long ruggedized shipping container and is ideal for multi-petabyte or Exabyte-scale digital media migrations and data center shutdowns.
  • A Snowmobile arrives at the customer site and appears as a network-attached data store for more secure, high-speed data transfer.
  • After data is transferred to Snowmobile, it is driven back to an AWS Region where the data is loaded into S3.
  • Snowmobile is tamper-resistant, waterproof, and temperature controlled with multiple layers of logical and physical security – including encryption, fire suppression, dedicated security personnel, GPS tracking, alarm monitoring, 24/7 video surveillance, and an escort security vehicle during transit.
62
Q

What is AWS Edge Computing?

A
  • Edge computing involves processing data at an edge location while it is being created. An edge location is a location that may not have internet connectivity or access to computing power. Examples of edge locations include trucks on the road, ships on the sea, and mining stations on the ground.
  • Edge computing is necessary to perform computation and data processing at these locations.
  • To do edge computing, AWS provides Snowball Edge or Snowcone devices that can be embedded into edge locations.
  • Use cases of edge computing include pre-processing data, doing machine learning at the edge, and transcoding media streams in advance.
  • Data can be transferred back to AWS by shipping the Snowball Edge or Snowcone device back.
  • Edge computing involves processing data very close to where it is being created.
63
Q

Options for Edge Computing?

A

**Snowcone (smaller)**
* 2 CPUs, 4 GB of memory, wired or wireless access
* USB-C power using a cord or the optional battery

Snowball Edge - Compute Optimized
* 52 vCPUs, 208 GiB of RAM
* Optional GPU (useful for video processing or machine learning)
* 42 TB usable storage

Snowball Edge - Storage Optimized
* Up to 40 vCPUs, 80 GiB of RAM
* Object storage clustering available

All: Can run EC2 Instances & AWS Lambda functions (using AWS IoT Greengrass)
Long term deployment options: 1 and 3 years discounted pricing

64
Q

What is AWS OpsHub?

A

AWS OpsHub is an application that you can download and install on any Windows or Mac, such as a laptop. AWS OpsHub is a graphical user interface you can use to manage your AWS Snowball devices, enabling you to rapidly deploy edge computing workloads and simplify data migration to the cloud. With just a few clicks in AWS OpsHub, you have the full functionality of the Snowball devices at your fingertips; you can unlock and configure devices, drag-and-drop data to devices, launch applications, and monitor device metrics.

65
Q

Can a snowball be imported into glacier?

A

Snowball cannot be imported into Glacier directly. You must import it into S3 first and then use lifecycle policy to move the data to Glacier.

66
Q

What is Amazon FSx?

A

Amazon FSx makes it easy and cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities. Amazon FSx is built on the latest AWS compute, networking, and disk technologies to provide high performance and lower TCO. And as a fully managed service, it handles hardware provisioning, patching, and backups – freeing you up to focus on your applications, your end users, and your business.

67
Q

What is Amazon FSx for Windows File Server?

A
  1. FSx for Windows is a fully managed Windows file system share drive
  2. Supports SMB protocol & Windows NTFS
  3. Microsoft Active Directory integration, ACLs, user quotas
  4. Can be mounted on Linux EC2 instances
  5. Supports Microsoft’s Distributed File System (DFS) Namespaces (group files across multiple FS)
  6. Scales up to 10s of GB/s, millions of IOPS, 100s of PB of data
  7. Storage options:
  * SSD – latency-sensitive workloads (databases, media processing, data analytics, …)
  * HDD – broad spectrum of workloads (home directories, CMS, …)
  8. Can be accessed from your on-premises infrastructure (VPN or Direct Connect)
  9. Can be configured to be Multi-AZ (high availability)
  10. Data is backed up daily to S3
68
Q

What is Amazon FSx for Lustre?

A

Lustre is a type of parallel distributed file system for large-scale computing; the name is derived from “Linux” and “cluster”.
1. Use cases: machine learning, High Performance Computing (HPC) (exam tip), video processing, financial modeling, electronic design automation
2. Scales up to 100s of GB/s, millions of IOPS, sub-ms latencies
3. Storage options:
* SSD – low-latency, IOPS-intensive workloads; small & random file operations
* HDD – throughput-intensive workloads; large & sequential file operations
4. Seamless integration with S3: can “read S3” as a file system (through FSx) and write the output of computations back to S3 (through FSx)
5. Can be used from on-premises servers (VPN or Direct Connect)

69
Q

What is AWS Storage Gateway?

A

AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Storage Gateway provides a standard set of storage protocols such as iSCSI, SMB, and NFS, which allow you to use AWS storage without rewriting your existing applications. Use cases:
1. Disaster recovery
2. Backup & restore
3. Tiered storage
4. On-premises cache & low-latency file access

70
Q

What are different types of Storage Gateways?

A

1. S3 File Gateway
2. FSx File Gateway
3. Volume Gateway
4. Tape Gateway

71
Q

What is aws s3 file gateway?

A
  1. Configured S3 buckets are accessible using the NFS and SMB protocols
  2. The most recently used data is cached in the file gateway
  3. Supports S3 Standard, S3 Standard-IA, S3 One Zone-IA, S3 Intelligent-Tiering
  4. Transition to S3 Glacier using a lifecycle policy
  5. Bucket access uses IAM roles for each File Gateway
  6. The SMB protocol integrates with Active Directory (AD) for user authentication
72
Q

What is Amazon FSx File Gateway?

A
  1. Native access to Amazon FSx for Windows File Server
  2. Local cache for frequently accessed data
  3. Windows-native compatibility (SMB, NTFS, Active Directory…)
  4. Useful for group file shares and home directories
74
Q

What is Volume Gateway?

A
  1. Block storage using the iSCSI protocol, backed by S3
  2. Backed by EBS snapshots, which can help restore on-premises volumes
  3. Cached volumes: low-latency access to the most recent data
  4. Stored volumes: entire dataset is on premises, with scheduled backups to S3
75
Q

What is Tape Gateway?

A
  1. Some companies have backup processes using physical tapes
  2. With Tape Gateway, companies use the same processes, but in the cloud
  3. Virtual Tape Library (VTL) backed by Amazon S3 and Glacier
  4. Back up data using existing tape-based processes (and the iSCSI interface)
  5. Works with leading backup software vendors
76
Q

What is AWS Transfer Family?

A

AWS Transfer Family is a secure transfer service that enables you to transfer files into and out of AWS storage services. It supports transferring data to or from the following AWS storage services:
1. Amazon Simple Storage Service (Amazon S3)
2. Amazon Elastic File System (Amazon EFS) Network File System (NFS) file systems
It supports transferring data over the following protocols:
1. Secure Shell (SSH) File Transfer Protocol (SFTP): version 3
2. File Transfer Protocol Secure (FTPS)
3. File Transfer Protocol (FTP)
4. Applicability Statement 2 (AS2)

77
Q

What can preserve the metadata of files while moving them from one location to another?

A

AWS DataSync

78
Q

What is AWS DataSync?

A

AWS DataSync is a secure, online service that automates and accelerates moving data between on-premises storage and AWS storage services. It can also help migrate files from other clouds to AWS. Replication can be scheduled hourly, daily, or weekly, and it needs an agent installed on the source system. DataSync can copy data between Network File System (NFS) shares, Server Message Block (SMB) shares, Hadoop Distributed File Systems (HDFS), self-managed object storage, AWS Snowcone, Amazon S3 buckets, Amazon EFS file systems, Amazon FSx for Windows File Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, and Amazon FSx for NetApp ONTAP file systems.

79
Q

Does EFS have lifecycle management?

A

Amazon EFS lifecycle management automatically manages cost-effective file storage for your file systems. When enabled, lifecycle management migrates files that have not been accessed for a set period of time to the EFS Standard–Infrequent Access (Standard-IA) or One Zone–Infrequent Access (One Zone-IA) storage class, depending on your file system. You define that period of time by using the Transition into IA lifecycle policy.

80
Q

Can CloudFront be used for uploading files to S3?

A

Yes. If your objects are smaller than 1 GB, or the data set is less than 1 GB in size, consider using Amazon CloudFront’s PUT/POST commands for optimal performance. For data larger than 1 GB, S3 Transfer Acceleration is the better option.