Storage and Databases Flashcards

1
Q

AWS Snowball Edge data migration

A

Enables petabyte-scale offline data migration from on-premises storage and databases into Amazon S3.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

gp2 and gp3 EBS volume types

A

General purpose SSD that balances price and performance.

Can be used as boot volumes
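
As an illustration of how the two differ: gp2 baseline IOPS scale with volume size (3 IOPS per GiB, floored at 100 and capped at 16,000), while gp3 starts at a flat 3,000 IOPS regardless of size. A quick sketch of the gp2 rule:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline IOPS: 3 IOPS per GiB, minimum 100, maximum 16,000."""
    return min(max(100, 3 * size_gib), 16_000)

print(gp2_baseline_iops(100))     # 300
print(gp2_baseline_iops(10))      # 100 (floor applies)
print(gp2_baseline_iops(10_000))  # 16000 (capped)
```

With gp3, IOPS (up to 16,000) and throughput can be provisioned independently of volume size.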

3
Q

io1 and io2 Block Express EBS volume types

A

Highest-performance SSD volumes for mission-critical, low-latency or high-throughput workloads.

Can be used as boot volumes

4
Q

st1 EBS volume types

A

Low-cost HDD volume for frequently accessed, throughput-intensive workloads

5
Q

sc1 EBS volume types

A

Lowest-cost HDD volume designed for less frequently accessed workloads

6
Q

EBS Snapshots

A
  • Are incremental and consume I/O (shouldn’t be run while the app is handling a lot of traffic).
  • Not necessary to detach volume.
  • Can copy across regions.
7
Q

Amazon Data Lifecycle Manager

A

Automates creation, retention, and deletion of EBS snapshots and EBS-backed AMIs.

Uses tags to identify resources.

8
Q

EBS Multi-Attach

A

Only available for the io1/io2 family. Attaches the same volume to multiple instances in the same AZ.

9
Q

Amazon EFS

A

Managed NFS that can be mounted on Linux EC2 instances across multiple AZs and from on-premises servers.

Can only attach to one VPC.

10
Q

EFS performance modes

A
  1. General Purpose (default): latency sensitive use cases
  2. Max I/O: higher latency, throughput, highly parallel
11
Q

EFS Throughput Mode

A
  1. Bursting: scales with the amount of storage in your file system
  2. Provisioned: if you know your workload’s performance requirements, regardless of storage size
  3. Elastic: Automatically scales based on your workload
12
Q

EFS storage classes

A
  1. Standard: frequently accessed files
  2. Infrequent access: cost to retrieve files, lower price to store
  3. Archive: rarely accessed data (50% cheaper)
13
Q

EFS Access Points

A

Application-specific entry points that make it easier to manage access to shared datasets.
Enforce a user identity; clients can only access data in the specified directory or its subdirectories.

14
Q

EFS Cross Region Replication

A

Can be set up for new or existing EFS file systems.
1. Provides RPO and RTO of minutes
2. Doesn’t affect the provisioned throughput

15
Q

Storage Classes

S3 Standard

A
  1. General purpose storage for frequently accessed data
  2. Low latency and high throughput performance
16
Q

Storage Classes

S3 Intelligent-Tiering

A
  1. Automatic cost savings for data with unknown or changing access patterns
  2. Opt-in asynchronous archive capabilities for objects that become rarely accessed
  3. Small monthly monitoring and automation charge
  4. No operational overhead, no lifecycle charges, no retrieval charges, and no minimum storage duration
17
Q

Storage Classes

S3 Express One Zone

A
  1. High performance storage for your most frequently accessed data
  2. Consistent single-digit millisecond request latency
  3. Improve access speeds by 10x and reduce request costs by 50% compared to S3 Standard
18
Q

Storage Classes

S3 Standard Infrequent Access

A
  1. Infrequently accessed data that needs millisecond access
  2. Same low latency and high throughput performance of S3 Standard
19
Q

Storage Classes

S3 One Zone-Infrequent Access

A
  1. Re-creatable infrequently accessed data
  2. Same low latency and high throughput performance of S3 Standard
20
Q

Storage Classes

S3 Glacier Instant Retrieval

A
  1. Long-lived data that is accessed a few times per year with instant retrievals
  2. Data retrieval in milliseconds with the same performance as S3 Standard
21
Q

Storage Classes

S3 Glacier Flexible Retrieval

A
  1. Backup and archive data that is rarely accessed and low cost
  2. Ideal for backup and disaster recovery use cases when large sets of data occasionally need to be retrieved in minutes, without concern for costs
  3. Configurable retrieval times, from minutes to hours, with free bulk retrievals
22
Q

Storage Classes

S3 Glacier Deep Archive

A
  1. Archive data that is very rarely accessed and very low cost
  2. Ideal alternative to magnetic tape libraries
  3. Retrieval time within 12 hours
23
Q

S3 Replication

A

Versioning must be enabled
* Cross region replication
* Same region replication
* S3 replication time control: Replicates most objects in seconds

24
Q

S3 Baseline Performance

A

3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix in a bucket
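
Since these limits apply per prefix, spreading keys across multiple prefixes multiplies the aggregate request rate. A quick illustration (the prefix count and key layout are arbitrary examples):

```python
# Per-prefix S3 request limits (from the card above)
PUT_PER_PREFIX = 3_500
GET_PER_PREFIX = 5_500

def aggregate_get_rps(num_prefixes: int) -> int:
    """Aggregate GET rate when reads are spread evenly across prefixes."""
    return num_prefixes * GET_PER_PREFIX

# e.g. sharding keys as data/0/..., data/1/..., ..., data/9/...
print(aggregate_get_rps(10))  # 55000 GET requests/second
```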

25
Q

S3 multi-part upload

A

Upload object parts independently, and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts.

Required for files larger than 5 GB (the single-PUT size limit).

26
Q

S3 Transfer Acceleration

A

Increases transfer speed by routing data through AWS edge locations, which forward it to the bucket.

27
Q

S3 Byte Range Fetches

A

Parallelize GETs by requesting specific byte ranges. Better resilience in case of failures.
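
A minimal sketch of splitting an object into HTTP Range header values for parallel GETs (the chunk size is an arbitrary choice; each value would be passed as the Range parameter of a GetObject request):

```python
def byte_ranges(object_size: int, chunk: int) -> list[str]:
    """Split an object into HTTP Range header values for parallel GETs."""
    return [
        f"bytes={start}-{min(start + chunk, object_size) - 1}"
        for start in range(0, object_size, chunk)
    ]

# A 25 MB object fetched in 10 MB chunks -> 3 parallel requests
print(byte_ranges(25_000_000, 10_000_000))
```

A failed range can be retried alone, which is where the resilience benefit comes from.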

28
Q

S3 Analytics

Storage Class Analysis

A

Helps you decide when to transition objects from Standard to Standard-IA. The report is updated daily.

29
Q

S3 Storage Lens

A

Discover anomalies, identify cost efficiencies, and apply data-protection best practices across an entire AWS Organization.

30
Q

S3 Storage Lens - Metrics

A
  • Summary Metrics
  • Cost Optimization Metrics
  • Data Protection Metrics
  • Access Management Metrics
  • Event Metrics
  • Performance Metrics
  • Activity Metrics
  • Detailed Status Code Metrics
31
Q

S3 Storage Lens - Free vs Paid

A

Free metrics:
1. Available for all customers
2. Data is available for 14 days

Advanced metrics:
1. CloudWatch publishing
2. Prefix aggregation
3. Data is available for 15 months

32
Q

Amazon FSx

A

Launch third-party high-performance file systems
* You cannot decrease storage capacity; use DataSync to migrate to a smaller FSx file system.

33
Q

Amazon FSx for Windows

A
  • Supports SMB protocol & Windows NTFS
  • Integration with AD, ACLs and user quotas
  • Can be mounted on Linux EC2 instances
  • Can group files across multiple file systems with Microsoft DFS Namespaces
  • Can be accessed from on-premises
  • Can be configured Multi-AZ
  • Data is backed up daily to S3
34
Q

Amazon FSx for Lustre

A
  • The name Lustre is a blend of “Linux” and “cluster”
  • Used for High Performance Computing
  • Seamless integration with S3
  • Can be used from on-premises
  • Data Lazy Loading: Only the data that is processed is loaded
35
Q

FSx File System Deployment Options

A

Scratch File System: Temporary storage with high burst performance
Persistent File System: Data is replicated within the same AZ

36
Q

Amazon FSx for NetApp ONTAP

A
  • Compatible with the NFS, SMB, and iSCSI protocols
  • Move workloads to AWS
  • Storage shrinks or grows automatically
  • Point-in-time instantaneous cloning
37
Q

Amazon FSx for OpenZFS

A
  • Compatible with NFS
  • Move workloads to AWS
  • Point-in-time instantaneous cloning
38
Q

AWS DataSync

A

Replicate large amounts of data hourly, daily, or weekly to or from:
* S3
* EFS
* FSx

File permissions and metadata are preserved

39
Q

AWS Data Exchange

A

Find third-party data in the cloud, subscribe to it, load the data into S3/Redshift, and analyze it

40
Q

AWS Transfer Family

A

File transfers into and out of S3 or EFS using the FTP, FTPS, or SFTP protocols.

41
Q

AWS Transfer Family - Endpoint Types

A
  1. Public endpoints: IPs managed by AWS, subject to change
  2. VPC endpoint with internal access: static private IPs; set allow lists (SGs & NACLs)
  3. VPC endpoint with internet-facing access: static private and public IPs (EIPs)
42
Q

DynamoDB Indexes

A
  • Local secondary index: the same partition key with an alternative sort key. Must be defined at table creation time.
  • Global secondary index: a different partition key and an optional sort key. Can be created after the table exists.
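The difference shows up in a boto3-style create_table parameter set (the table and attribute names below are hypothetical): the LSI reuses the table’s partition key, while the GSI declares its own.

```python
# Hypothetical "Orders" table keyed by customer_id + order_id,
# with an LSI on order_date and a GSI keyed on status.
table_definition = {
    "TableName": "Orders",
    "KeySchema": [
        {"AttributeName": "customer_id", "KeyType": "HASH"},
        {"AttributeName": "order_id", "KeyType": "RANGE"},
    ],
    "AttributeDefinitions": [
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_id", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
        {"AttributeName": "status", "AttributeType": "S"},
    ],
    # LSI: same partition key, alternative sort key; creation time only
    "LocalSecondaryIndexes": [{
        "IndexName": "ByDate",
        "KeySchema": [
            {"AttributeName": "customer_id", "KeyType": "HASH"},
            {"AttributeName": "order_date", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    # GSI: entirely different partition key; can be added later via update_table
    "GlobalSecondaryIndexes": [{
        "IndexName": "ByStatus",
        "KeySchema": [{"AttributeName": "status", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    "BillingMode": "PAY_PER_REQUEST",
}

# The LSI's partition key matches the table's own partition key
print(table_definition["LocalSecondaryIndexes"][0]["KeySchema"][0]["AttributeName"])
```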
43
Q

DynamoDB Streams

A

React to changes to DynamoDB tables in real time with AWS Lambda or EC2.

44
Q

DynamoDB Accelerator (DAX)

A

Seamless cache, microsecond latency for reads & queries.

45
Q

Amazon OpenSearch

A

Provides search and indexing capabilities

46
Q

RDS Engines

A
  • PostgreSQL
  • MySQL
  • MariaDB
  • IBM DB2
  • Oracle
  • SQL Server
47
Q

RDS Multi AZ

A

Standby instance for failover in case of an outage. One DNS name for reads and writes that automatically fails over to the standby instance.

48
Q

RDS Read Replicas

A
  1. Increase read throughput.
  2. Eventual consistency
  3. Cross-region
  4. Distribute reads across replicas with Route53 weighted record set
49
Q

RDS IAM Authentication

A
  • Works with MariaDB, MySQL and PostgreSQL
50
Q

RDS Oracle backups

A
  • RDS backups to restore to Amazon RDS for Oracle
  • Oracle RMAN (Recovery Manager) to restore to a non-RDS database.
51
Q

Real Application Clusters (RAC)

A
  • Not supported by RDS for Oracle
52
Q

RDS

mysqldump

A

Migrate a MySQL RDS database to a non-RDS MySQL database.

53
Q

Aurora

A
  • Compatible with PostgreSQL and MySQL
  • Automatically grows up to 128 TB
  • Reader endpoint to access up to 15 read replicas
  • Cross-region read replicas copy the entire DB
54
Q

Aurora High Availability

A

It keeps 6 copies of the data across 3 AZs:
1. 4 copies needed for writes
2. 3 copies needed for reads
3. Self healing
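
Because the write quorum (4) plus the read quorum (3) exceeds the number of copies (6), every read quorum overlaps every write quorum, so a read always sees at least one copy of the latest committed write. A toy check of that overlap condition:

```python
def quorums_overlap(n_copies: int, write_quorum: int, read_quorum: int) -> bool:
    """Any read quorum intersects any write quorum iff W + R > N."""
    return write_quorum + read_quorum > n_copies

# Aurora's 6-copy storage: 4-of-6 writes, 3-of-6 reads
print(quorums_overlap(6, 4, 3))  # True
```

Losing 2 copies still leaves a write quorum; losing 3 still leaves a read quorum.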

55
Q

Amazon Aurora global databases

A

Set up multiple read-only Aurora clusters spanning up to 5 regions (changes made in the primary cluster are synchronized automatically)

56
Q

Aurora global write forwarding

A

Enables a secondary cluster to forward SQL statements that perform write operations to the primary cluster

57
Q

Aurora Endpoints

A
  1. Cluster endpoint: writer endpoint that connects to the current primary DB instance
  2. Reader endpoint: provides load balancing for read connections to all replicas
  3. Custom endpoints: instances that you choose in the cluster
  4. Instance endpoint
58
Q

Aurora Serverless

A

Automated database instantiation and auto scaling. Good for infrequent, intermittent or unpredictable workloads.

59
Q

Convert RDS to Aurora

A
  1. Create an RDS snapshot and restore it to an Aurora instance
  2. Create an Aurora Read Replica from an RDS instance and promote it to an Aurora instance