Elastic Compute Cloud (EC2) Basics Flashcards

1
Q

Virtualization 101

A

Virtualization is the process of running one or more operating systems on a single piece of physical hardware.

Before virtualization, the architecture looks something like this:

A server with a collection of physical resources: CPU & memory, network card and other devices. On top of this runs an OS; a small part of that OS (the kernel) runs with a special level of access to the hardware, called "Privileged Mode".

-The kernel is the only part of the OS capable of interacting directly with the hardware.
-The OS can allow other software to run, such as applications, but these run in "User Mode" or "Unprivileged Mode". (They cannot directly interact with the hardware; they have to go through the OS by making a "System Call".)

Virtualization Architecture

Each OS is separate and runs its own applications. But the CPU can only have one thing running in Privileged Mode with direct access to the hardware, and each of these operating systems, running in an unmodified state, expects to be running on its own with privileged access.

-Trying to run multiple operating systems in this way will cause system crashes.

Virtualization was created as a solution to this problem, allowing multiple privileged operating systems to run on the same hardware. Initially virtualization was really inefficient, because the hardware wasn't aware of it.

2
Q

Methods of Performing Virtualization

A

-Emulated Virtualization (Software Virtualization)

With this method a host OS runs on the hardware and includes additional capabilities known as a "Hypervisor". This software runs in Privileged Mode and has full access to the hardware on the host server. The multiple other operating systems, now referred to as "Guest OSs", are each wrapped in a container of sorts called a "Virtual Machine" (VM).

Each VM is an unmodified OS, such as Windows or Linux, with a virtual allocation of resources such as CPU, memory and local disk space. VMs also have devices mapped into them, such as network cards, graphics cards and other local devices such as storage. The Guest OS believes these to be real, physical devices, but they aren't; they are hardware emulated by the Hypervisor.

Because the Guest OSs believe they are running on real hardware, they still attempt to make privileged calls: they try to take control of the CPU and try to directly read/write what they think is their memory and disk, none of which is real.

The Hypervisor performs a process called "Binary Translation": any privileged operations are intercepted and translated on the fly, in software, by the Hypervisor. This is what makes emulated virtualization slow.

-Para-Virtualization

With Para-Virtualization the Guest OSs still run in the same VM containers, with virtual resources allocated to them, but instead of the slow Binary Translation another approach is used. Para-Virtualization only works on a small subset of operating systems: those which can be modified.

With Para-Virtualization, the areas of the Guest OS which would attempt to make privileged calls are modified: instead of calling the hardware directly, they make software calls to the Hypervisor, called "Hypercalls".

So the source code of the OS is changed so that areas which would traditionally make privileged calls directly to the hardware call the Hypervisor instead. (This makes the Guest OS virtualization-aware.)

-Hardware Assisted Virtualization

With this, the hardware itself becomes virtualization-aware: the CPU contains specific instructions and capabilities so the Hypervisor can directly control and configure this support. The CPU itself knows virtualization is being performed.

When a Guest OS attempts to run privileged instructions, they are trapped by the CPU, which knows to expect them from Guest OSs, so the system as a whole doesn't halt. They are redirected by the hardware to the Hypervisor, which handles how they are executed.

–Easier to obtain better performance

–I/O operations can impact performance (Since there’s only 1 network card)

  • Single Root I/O Virtualization (SR-IOV)

SR-IOV allows a device, such as a network adapter, to separate access to its resources among various PCIe hardware functions.

SR-IOV enables network traffic to bypass the software switch layer of the Hyper-V virtualization stack. Because the VF (virtual function) is assigned to a child partition, the network traffic flows directly between the VF and that partition.

–As a result, the I/O overhead in the software emulation layer is reduced, achieving network performance that is nearly the same as in non-virtualized environments.

–In EC2, this is “Enhanced Networking”.

3
Q

EC2 Architecture and Resilience

A

-EC2 Instances are virtual machines (OS+Resources)

-EC2 Instances run on EC2 Hosts, which are physical server hardware managed by AWS.

-These are Shared Hosts or Dedicated Hosts

–Shared Hosts = hosts which are shared between different AWS customers, so you don't get any ownership of the hardware; you pay for individual instances based on how long you run them and what resources they have allocated. (Every customer is isolated from every other customer.)

–Dedicated Hosts = you pay for the entire host, not the instances which run on it; it's dedicated to your account and you don't share it with other customers.

-Hosts = 1 AZ - AZ fails, host fails, instances fail. (EC2 is an AZ-resilient service)

-Local storage = Instance Store (also AZ Resilient)

-Remote storage = EBS (also AZ Resilient)

-If you restart an instance, it stays on the same host.

An instance doesn't stay on the same host if:

-The host fails or is taken down for maintenance.
-The instance is stopped and then started.

If either of these happens, the instance will be relocated to another host in the same AZ.

-You cannot connect an instance to an EBS volume in another AZ.

4
Q

What’s EC2 good for?

A

-Traditional OS+Application Compute

-Long-Running Compute

-Server style applications…

-Perfect for services/apps that need burst or steady-state load.

-Monolithic application stacks (Database/Middleware..)

-Migrated application workloads or Disaster Recovery

5
Q

EC2 Instances Types

A

When you choose an EC2 instance type, you are doing so to influence several things:

-Raw CPU, Memory, Local Storage Capacity & Type

-Resource Ratios - some instance types give you more of one resource than another.

-The amount of Storage and Data Network Bandwidth.

-System Architecture / Vendor - Intel/AMD ….

-Additional Features and Capabilities - GPUs, FPGAs..

6
Q

EC2 Categories

A

-General Purpose - Default - Diverse workloads, equal resource ratio.

-Compute Optimized - Media Processing, HPC, Scientific Modelling, Gaming, Machine Learning.

-Memory Optimized - Processing large in-memory datasets, some database workloads.

-Accelerated Computing - Hardware GPUs, field-programmable gate arrays (FPGAs).

-Storage Optimized - Sequential and Random IO - scale-out transactional databases, data warehousing, Elasticsearch, analytics workloads.

7
Q

Decoding EC2 Types

A

R5dn.8xlarge > Instance Type

R = Instance Family

5 = Instance Generation

dn = Additional Capabilities (This may vary)

8xlarge = Instance Size
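
As a rough illustration of the naming convention (this is not an official AWS parser, just a sketch using Python's re module):

```python
import re

# Illustrative pattern: family letters, generation digit(s),
# optional capability letters, then the size after the dot.
TYPE_RE = re.compile(
    r"^(?P<family>[a-z]+)(?P<generation>\d+)(?P<capabilities>[a-z-]*)\.(?P<size>.+)$",
    re.IGNORECASE,
)

def decode(instance_type: str) -> dict:
    """Split an EC2 instance type name into its component parts."""
    match = TYPE_RE.match(instance_type)
    if not match:
        raise ValueError(f"Unrecognised instance type: {instance_type}")
    return match.groupdict()

print(decode("R5dn.8xlarge"))
# {'family': 'R', 'generation': '5', 'capabilities': 'dn', 'size': '8xlarge'}
```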

8
Q

Storage Key Terms

A

-Direct (local) attached Storage - Storage on the EC2 Host (Instance Store)

–Generally super fast
–If the hardware/disk fails, the storage can be lost
–If an EC2 Instance moves between hosts, the storage can be lost

-Network attached Storage - Volumes delivered over the network (EBS)

–In on-premises environments, this uses protocols such as iSCSI or Fibre Channel. (In AWS - EBS)
–Highly resilient
–Separate from the instance hardware, so the storage can survive issues which impact the EC2 host.

-Ephemeral Storage - Temporary Storage (Instance Store)

–You can't rely on it being persistent.

-Persistent Storage - Permanent Storage - lives on past the lifetime of the instance (EBS)

9
Q

Three main categories of storage available within AWS

A

-Block Storage - Volume presented to the OS as a collection of blocks… no structure provided. MOUNTABLE, BOOTABLE.

–If you want a storage to boot from.
–If you want to utilize high performance storage inside an O.S

-File Storage - Presented as a file share.. has structure. MOUNTABLE, NOT BOOTABLE.

–If you want to share a file system across multiple different servers or clients, or have it accessed by different services.

-Object Storage - collection of objects, flat structure. NOT MOUNTABLE, NOT BOOTABLE.

–Scalable - it can be accessed by thousands or millions of people simultaneously
–If you want large-scale read and write access to object data. (Web-scale applications)

10
Q

Storage Performance

A

IO Size x IOPS = Throughput

-IO (Block) Size = the size of the blocks of data that you're writing to disk, expressed in kilobytes or megabytes (KB/MB).

-Input/Output Operations per Second (IOPS) = Measures the number of IO operations the storage system can support in a second.

-Throughput = the rate of data a storage system can transfer to or from a particular piece of storage (MB/s).

If you want to maximize throughput, you need to use the right block size and then maximize IOPS; if any one of these three is limited, it can impact the other two.
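
A quick worked example of the formula (the numbers are illustrative, not tied to any specific volume type):

```python
def throughput_mb_per_s(io_size_kb: float, iops: float) -> float:
    """Throughput = IO (block) size x IOPS, converted from KB/s to MB/s."""
    return io_size_kb * iops / 1024

# 16KB blocks at 3,000 IOPS ~= 47 MB/s
print(throughput_mb_per_s(16, 3000))

# 256KB blocks at the same 3,000 IOPS = 750 MB/s - bigger blocks, more throughput
print(throughput_mb_per_s(256, 3000))
```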

11
Q

Elastic Block Store (EBS)

A

-Is a service which provides Block Storage - raw disk allocations (volume) - Can be encrypted using KMS

-EC2 instances see a block device and create a file system on it (ext3/4, xfs)

-Storage is provisioned in ONE AZ (Resilient in that AZ)

-You create a volume and generally attach it to ONE EC2 instance (or other service) over a storage network.

-Can be detached and reattached; EBS volumes are not linked to the lifecycle of one instance - they are persistent.

-Snapshot (backup) into S3 - Create volume from snapshot (migrate between AZs)

-EBS can provision volumes based on different physical storage types, different sizes, different performance profiles.

-Billed based on GB-month. (and in some cases performance)

-EBS replicates within an AZ - failure of an AZ means failure of the volume.

-Snapshots copied across regions provide global resilience.

12
Q

EBS Volume Types - General Purpose SSD - GP2

A

-Great for Boot volumes, low-latency interactive apps, dev & test.

-Volumes can be as small as 1GB, or as large as 16TB.

-When a volume is created, it's created with an IO credit allocation. (An IO credit is one 16KB IO; IOPS assume 16KB - 1 IOPS is 1 IO in 1 second.)

-IO "Credit" Bucket - capacity of 5.4 million IO credits - fills at the rate of the baseline performance.

-Bucket Fills with min 100 IO Credits per second - Regardless of volume size.

-Beyond the 100 minimum the bucket fills with 3 IO credits per second, per GB of volume size (Baseline Performance)

This means a 100GB volume gets 300 IO credits per second refilling the bucket. (It depends on the volume size.)

-By default, GP2 can burst up to 3,000 IOPS by depleting the bucket.

-All volumes get an initial 5.4 million IO credits - 30 minutes @ 3,000 IOPS - great for boot volumes and initial workloads.

If you are consuming IO credits faster than the rate at which your bucket is refilling, then you are depleting the bucket.

-Volumes larger than 1,000GB (1TB) - baseline is above burst. The credit system isn't used & you always achieve baseline, up to the GP2 maximum of 16,000 IO credits per second (baseline performance).
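
A small sketch of the baseline/burst arithmetic described above (illustrative only, based on the numbers in this card):

```python
def gp2_baseline_iops(volume_gb: int) -> int:
    """GP2 baseline: 3 IO credits/sec per GB, minimum 100, capped at 16,000."""
    return min(max(100, 3 * volume_gb), 16_000)

BURST = 3_000  # default GP2 burst ceiling

for size_gb in (10, 100, 1_000, 2_000):
    baseline = gp2_baseline_iops(size_gb)
    note = "uses the credit bucket to burst" if baseline < BURST else "at/above burst - credits not needed"
    print(f"{size_gb:>5} GB -> baseline {baseline:>6} IOPS ({note})")
```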

13
Q

EBS Volume Types - General Purpose SSD - GP3

A

-Useful for: Virtual desktops, medium sized single instance databases such as MSSQL Server and Oracle DB, low-latency interactive apps, dev & test, boot volumes.

-Removes the credit bucket architecture of GP2.

-Every GP3 volume, regardless of size, starts with a STANDARD 3,000 IOPS & it can transfer 125 MiB/s.

-Volumes can be as small as 1GB, or as large as 16TB.

-GP3 is CHEAPER (~20% lower base price than GP2).

-If you need more performance, you can pay extra for up to 16,000 IOPS or 1,000 MiB/s.

-4x Faster Max throughput vs GP2 - 1,000 MiB/s vs 250 MiB/s.
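
As a hedged sketch of how this looks with boto3 (the region, AZ and values are placeholders), a GP3 volume can be provisioned with performance beyond the 3,000 IOPS / 125 MiB/s baseline:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# GP3 decouples size from performance: IOPS and throughput are set explicitly.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=200,                       # GiB
    VolumeType="gp3",
    Iops=6000,                      # above the 3,000 baseline - paid extra
    Throughput=500,                 # MiB/s, above the 125 baseline - paid extra
)
print(volume["VolumeId"])
```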

14
Q

EBS Volume Types - Provisioned IOPS SSD (io1/2)

A

-High performance, latency sensitive workloads, I/O-intensive NoSQL & relational databases.

-Designed for consistent low latency & jitter. **

-When you need smaller volumes and super high performance.

-With io1/io2/Block Express, IOPS can be adjusted independently of the size of the volume.

-Up to 64,000 IOPS per volume (4x GP2/3)
-Up to 256,000 IOPS per volume (Block Express)
-Up to 1,000 MB/s throughput
-Up to 4,000 MB/s throughput (Block Express)

-Volume sizes:
–4GB-16TB io1/2
–4GB-64TB BlockExpress

-Performance ratio:
–io1 - 50 IOPS/GB (MAX)
–io2 - 500 IOPS/GB (MAX)
–Block Express - 1,000 IOPS/GB (MAX)

-Per Instance Performance:
–Influenced by the type of volume, the type of instance, and the size of the instance.
–These maximums are more than a single EBS volume can provide, so you'll need multiple volumes to reach them.

–io1 - 260,000 IOPS & 7,500MB/s
–io2 - 160,000 IOPS & 4,750MB/s
–Block Express - 260,000 IOPS & 7,500MB/s

15
Q

EBS Volume Types - HDD-Based

A

There are 2 types of HDD-based EBS volume:

-Throughput Optimized (st1)

–Designed for when cost is a concern but you need frequent access, for throughput-intensive workloads (big data, data warehouses, log processing).
–Cheaper than the SSD volumes.
–Designed for sequentially accessed data; since it's HDD-based it's not great at random access - it's built for data that needs to be written or read in a fairly sequential way.
–Range from 125GB to 16TB.
–Max of 500 IOPS (1MB IO) - means MAX 500 MB/s.
–40MB/s per TB Base.
–250MB/s per TB Burst.

-Cold HDD (sc1)

–Designed for infrequently accessed workloads; geared towards maximum economy, when you just want to store lots of data and don't care about performance.
–Cheaper than st1
–Max of 250 IOPS (1MB IO) - means MAX 250 MB/s.
–12MB/s per TB Base.
–80MB/s per TB Burst.
–Range from 125GB to 16TB.
–Lowest cost HDD volume designed for less frequently accessed workloads.
–Colder data/archives requiring fewer scans per day.

16
Q

Instance Store

A

-Provides block storage devices.

-Physically connected to one EC2 Host.

-Instances on that host can access them.

-Highest storage performance in AWS.

-Included in instance price.

-ATTACHED ONLY AT LAUNCH NOT AFTER.

-Each instance can have a collection of volumes, which are backed by physical devices on the EC2 host.

-If an EC2 instance moves between hosts, it can't access the old instance store; it's allocated a new (blank) one. (Stop/start, maintenance, changing the instance type, hardware failure.)

You can achieve much higher levels of throughput and more IOPS using instance store volumes than using EBS.

-If you use a D3 instance = 4.6 GB/s throughput.
-If you use an I3 instance = 16 GB/s of sequential throughput.

-More IOPS and Throughput vs EBS ***

17
Q

EXAM POWERUP - Instance Store

A

-Local on EC2 Host.

-Add at launch ONLY.

-Lost on instance move, resize or hardware failure.

-Highest storage performance in AWS.

-You pay for it anyway - included in instance price.

-TEMPORARY.

18
Q

Choosing between Instance Store & EBS

A

-Persistence - EBS
-Resilience - EBS (AZ)
-Storage isolated from instance lifecycle - EBS

-Resilience w/ App In-built Replication - It depends
-High performance needs - It depends

-Super high performance needs - Instance Store
-Cost - Instance Store (It’s often included)

-Cheap = ST1 or SC1
-Throughput or streaming - ST1
-Boot - NOT ST1 or SC1

-GP2/3 - up to 16,000 IOPS
-IO1/2 - up to 64,000 IOPS (*256,000)
-You can take lots of individual EBS volumes and create a RAID set from them - up to 260,000 IOPS (io1/2-BE/GP2/3), the combined performance of those EBS volumes and the maximum possible EBS IOPS per instance.

-More than 260,000 IOPS - INSTANCE STORE

19
Q

Snapshots, Restore & Fast Snapshot Restore (FSR)

A

-They are an efficient way to back up EBS volumes to S3. (Protecting the data from AZ failures or local storage failure.)

-Snapshots are incremental volume copies to S3. (They become region resilient in S3)

-The first is a FULL COPY of all “data” on the volume.

-Future snaps are incremental.

-Volumes can be created (restored) from snapshots.

-Snapshots can be copied to another region.

Volume Performance

-New EBS volume = full performance immediately.

-Snaps restore lazily - fetched gradually.

-Requested blocks are fetched immediately.

-Force a read of all data immediately…

-Fast Snapshot Restore (FSR) - immediate restore. (It costs extra)

-.. up to 50 snapshots per region - You pick the snapshot & AZ.

Snapshots Consumption and Billing

-Gigabyte-month.
-Used data NOT allocated data. (Only snapshots)
-Incremental.
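
A minimal boto3 sketch of the snapshot/restore flow (the volume ID and AZ are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Back up an existing volume - incremental after the first full copy.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    Description="nightly backup",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Restore: create a new volume from the snapshot, potentially in another AZ.
restored = ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-east-1b",     # placeholder AZ
)
print(restored["VolumeId"])
```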

20
Q

EBS Encryption

A

-By default, no encryption is applied.

-Provides at rest encryption for volumes and for snapshots.

When you create an encrypted EBS volume, EBS uses KMS and a KMS key, which can either be the EBS default AWS managed key (aws/ebs) or a customer managed KMS key (that you create and manage).

That key is used by EBS when an encrypted volume is created. It is used to generate an encrypted Data Encryption Key (DEK) via the "GenerateDataKeyWithoutPlaintext" API call. The encrypted DEK is stored with the volume on the raw storage, and it can only be decrypted using the KMS key that created it (assuming the caller has permissions).

When the volume is first used - either mounted on an EC2 instance by you, or when an instance is launched - EBS asks KMS to decrypt the DEK that's used just for this one volume. That decrypted key is loaded into the memory of the EC2 host which will be using it.

-The key is only ever held in this decrypted form in the memory of the EC2 host, while it's using the volume.

The key is used by the host to encrypt and decrypt data between the instance and the EBS volume - specifically, the raw storage that the EBS volume is stored on.

-This means that data stored on the raw storage used by the volume is ciphertext. (Encrypted at rest)

-Data only exists in an unencrypted form inside the memory of the EC2 host.

-When the EC2 instance moves from this EC2 host to another, the decrypted key is DISCARDED, leaving only the encrypted version on disk. For that instance to use the volume again, the encrypted DEK needs to be decrypted again and loaded into the memory of the new EC2 host.

-If a snapshot is made of an encrypted volume, the snapshot is also encrypted. (Using the same DEK)
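
A short boto3 sketch of creating an encrypted volume (the AZ and KMS key ARN are placeholders; omitting KmsKeyId falls back to the default aws/ebs key):

```python
import boto3

ec2 = boto3.client("ec2")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=50,
    VolumeType="gp3",
    Encrypted=True,
    # Placeholder customer managed key; omit to use the default aws/ebs key.
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/11111111-2222-3333-4444-555555555555",
)
print(volume["VolumeId"], volume["Encrypted"])
```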

21
Q

EXAM POWER UP - EBS Encryption

A

-Accounts can be set to encrypt by default - using the default KMS key

-Otherwise choose a KMS key to use.

-Each volume uses 1 UNIQUE DEK.

-Snapshots & future volumes use the same DEK.

-Can’t change a volume to NOT be encrypted.

-OS isn’t aware of the encryption - no performance loss

-It doesn’t cost anything.

22
Q

EC2 Network & DNS Architecture

A

-Every EC2 instance has at least one Elastic Network Interface (ENI) - the primary ENI.
–You can attach more ENIs; they can be in different subnets, but all within the same AZ.

ENIs, have:

-a MAC Address (Hardware address of the interface)
-a primary private IPv4 address (from the range of the subnet) > 10.16.0.10 > ip-10-16-0-10.ec2.internal
-0 or more secondary private IPs
-0 or 1 public IPv4 address > 3.89.7.136 > ec2-3-89-7-136.compute-1.amazonaws.com

–Public DNS = private IP in VPC - Public IP = everywhere else

-1 Elastic IP per private IPv4 address

Elastic IP addresses are different from normal public IP addresses, where you get one per interface. With Elastic IPs, you can have one public Elastic IP address per private IP address on the interface.

If an EC2 instance has a normal (non-elastic) public IPv4 address and you assign an Elastic IP to the primary ENI, the original public IPv4 is REMOVED. ** (You can't get that specific address back)

-You can have 0 or more IPv6 addresses per interface (By default, publicly routable)
-You can have Security Groups (applied to ENIs, not instances)
-You can enable/disable the "Source/Destination Check" - when enabled, traffic on the interface is discarded if it isn't sourced from one of the IP addresses on the interface or destined to one of the IP addresses on the interface.
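
To make the Elastic IP behaviour concrete, here is a hedged boto3 sketch of allocating one and associating it with an instance (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP in the VPC scope.
eip = ec2.allocate_address(Domain="vpc")

# Associating it with the instance replaces any dynamic public IPv4 it had.
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
)
print(eip["PublicIp"])
```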

23
Q

EXAM POWER UP - EC2 Network & DNS Architecture

A

-Secondary ENI + MAC = Licensing

-Multiple interfaces can be used for multi-homed systems (one ENI per subnet) - e.g. an instance with ENIs in 2 different subnets (management & data).

-Different Security Groups - multiple interfaces.

-OS - NEVER SEES THE PUBLIC IPv4 ADDRESS.

-IPv4 Public IPs are Dynamic - Stop & Start = Change (Restarting is fine) (to avoid this use Elastic IP)

-Public DNS = private IP in VPC - Public IP = everywhere else.

24
Q

Amazon Machine Images (AMI)

A

AMIs are the images of EC2. They’re one way that you can create a template of an instance configuration, and then use that template to create many instances from that configuration.

-AMIs can be used to launch an EC2 instance

-AWS or Community provided

-Also launch instances from Marketplace (can include commercial software)

-Regional

-Each AMI has a unique ID - e.g. ami-0a887e401f7654935. (An AMI can only be used in the region it's in)

-AMIs control permissions (Public, Your Account, Specific Accounts) - By default, only your account can use it - You can set your AMI to be Public, or you can add specific AWS accounts onto that AMI

-You can create an AMI from an EC2 instance you want to template, and vice versa.
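
A minimal boto3 sketch of creating an AMI from a configured instance (the instance ID and name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Create an AMI from a configured instance; EBS snapshots are taken behind the scenes.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Name="webserver-baked-v1",         # placeholder name
    Description="Configured web server template",
)
print(image["ImageId"])
```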

25
Q

AMI Lifecycle

A
  1. Launch = You use an AMI to launch an EC2 instance
  2. Configure = You take the instance that you provisioned during the "Launch" phase and apply configuration to bring it into a state where it's perfectly set up for your organization.
  3. Create Image = You can take that configured instance, and actually create your own AMI.

This AMI will contain a few things:

When you create an AMI, EBS snapshots are created from any EBS volumes which are attached to that EC2 instance. (Incremental, but the first is a full copy of all the data used on each EBS volume.)

When you make an AMI, the EBS Snapshots are taken, and those Snapshots are actually referenced inside the AMI, using a “Block device mapping”.

-Block Device Mapping = just a table of data which links each snapshot ID with the device ID that the original volume had on the EC2 instance.

BDM will contain the ID of the right Snapshot, and the Block Device of the original volume (/dev/xvda)

  4. Launch = When this AMI is used to create a new instance, the instance will have the same EBS volume configuration as the original.

When you launch an instance using an AMI, what actually happens is the Snapshots are used to create new EBS volumes in the AZ, that you’re launching that instance into, and those volumes are attached to that new instance, using the same Device ID.
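
The final Launch phase, sketched with boto3 (the AMI ID reuses the example from the previous card and the instance type is a placeholder); the snapshots referenced in the block device mapping become new EBS volumes in the target AZ:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a new instance from the baked AMI; its volumes are created from the snapshots.
response = ec2.run_instances(
    ImageId="ami-0a887e401f7654935",  # example AMI ID from the earlier card
    InstanceType="t3.micro",          # placeholder instance type
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```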

26
Q

EXAM POWER UP - AMIs

A

-AMI = One Region, only works in that one region.

-AMI Baking - creating an AMI from a configured instance + application

-An AMI CAN’T BE EDITED - launch instance, update configuration and make a new AMI

-AMIs can be copied between regions (includes its Snapshots)

-Permissions - default = your account

-AMI costs are the storage capacity used by the EBS snapshots that the AMI references.

27
Q

EC2 Purchase Options (Launch Types) - On-Demand

A

Instances of different sizes run on the same EC2 hosts - consuming a defined allocation of resources

-On-demand instances are isolated but multiple customer instances run on shared hardware

-Per-second billing while the instance is running. Associated resources such as storage consume capacity, so they bill regardless of instance state.

-Default purchase option - No interruption, No capacity reservation, Predictable pricing, no upfront cost, no discount.

-Short term workloads - Unknown Workloads - Apps which can’t be interrupted

28
Q

EC2 Purchase Options (Launch Types) - Spot

A

-SPOT pricing is AWS selling UNUSED EC2 host capacity for up to a 90% discount - the spot price is based on the SPARE CAPACITY AT A GIVEN TIME.

-NOT RELIABLE (ONLY short-term)

-NEVER USE SPOT for workloads which CAN’T TOLERATE INTERRUPTIONS

-Non-time critical - Anything which can be rerun, Bursty Capacity needs, Cost sensitive workloads, Anything which is stateless

29
Q

EC2 Purchase Options (Launch Types) - Reserved

A

Reservations are a way you commit to AWS that you will use resources for a length of time. They form a part of most larger deployments within AWS.

-LONG-TERM CONSISTENT usage of EC2

-Matching instance, reduced or no p/sec price - As long as the reservation matches the instance, it will apply to that instance

-Unused reservation STILL BILLED

-Partial coverage of a larger instance - if you provision a larger instance than the reserved capacity, the reservation has a partial effect: you get a discount on a partial component of that larger instance.

-You can commit to 1 year (+) or 3 year terms (++) - You pay for the entire term

-No-upfront - offers the least discount

-Partial-Upfront - you pay a smaller lump sum in advance in exchange for a reduced p/sec fee.

-All-upfront - No p/sec fee and Greatest discount (with 3y term)

30
Q

Reserved Instances - Scheduled Reserved Instances

A

-Is a commitment, you specify the frequency, the duration and the time

-Ideal for long-term usage which doesn’t run constantly - Batch Processing daily for 5 hours starting at 23:00

-You reserve the capacity for a slightly cheaper rate versus on-demand, but you can only use that capacity during that time window.

-Weekly data or sales analysis every Friday for 24 hours, or a larger analysis process which needs 100 hours of EC2 capacity per month.

-It doesn't support all instance types or regions.

-You need to purchase a minimum of 1200 hours per year - 1 year minimum

31
Q

Reserved Instances - Capacity Reservations

A

AWS also offers discounted hourly rates in exchange for an upfront fee and term contract. Services such as Amazon EC2 and Amazon RDS use this approach to sell reserved capacity for hourly use of Reserved Instances.

When you reserve capacity with Reserved Instances, your hourly usage is calculated at a discounted rate for instances of the same usage type in the same Availability Zone (AZ). When you launch additional instances of the same instance type in the same Availability Zone and exceed the number of instances in your reservation, AWS averages the rates of the Reserved Instances and the On-Demand Instances to give you a blended rate.

-A Regional Reservation provides a billing discount for valid instances launched in any AZ in that region.

While flexible, they DON'T RESERVE CAPACITY within an AZ - which is risky during major faults when capacity can be limited. When you are launching instances, even with a Regional Reservation, you're launching them with the same priority as on-demand instances.

-Zonal Reservations only apply to one AZ, providing billing discounts and capacity reservation in that AZ.

If you launch instances into another availability zone in that Region, you get neither benefit (Full Price & No Capacity Reservation)

-On-Demand capacity reservations can be booked to ensure you always have access to capacity in an AZ when you need it - but at full on-demand price. No term limits- but you pay regardless of if you consume it.

Capacity Reservations don't have the same 1 or 3 year commitment requirements as Reserved Instances. You're not getting any billing benefit when using Capacity Reservations - you're just reserving the capacity.

So at any point you can book a Capacity Reservation if you know you need some EC2 capacity, without worrying about 1 or 3 year term commitments, but you don't benefit from any cost reduction.

32
Q

EC2 Purchase Options (Launch Types) - Dedicated Hosts

A

Is an EC2 host, which is allocated to you IN ITS ENTIRETY.

-You pay for the host itself, which is designed for a specific family of instances (No instance charges)

You can launch various sizes of instances on the host, consuming all the way up to the complete resource capacity of that host. You need to manage this capacity: if the Dedicated Host runs out of capacity, you can't launch any additional instances.

-These hosts come with all the resources that you would expect from a physical machine. (number of cores, memory, local storage and network connectivity)

-You would use this option, if you have Licensing based on Sockets/Cores **

Such licensing is based on the amount of resources in a physical machine, not on the resources allocated to a virtual machine or an instance within AWS.

-They have a feature called Host Affinity which links instances to certain EC2 hosts.

So if you stop and start the instance, it remains on the same host.

-Only your instances will run on Dedicated Hosts

33
Q

EC2 Purchase Options (Launch Types) - Dedicated Instances

A

Your instances run alongside other instances of yours, and no other customers use the same hardware.

-You don’t own, or share the host

-Charges for instances but dedicated hardware

-You pay a one-off hourly fee, for any Regions where you’re using Dedicated Instances.

-Fee for using Dedicated Instances themselves

-Common in sectors of the industry with really strict requirements which mean you can't share infrastructure.

34
Q

EC2 Savings Plan

A

Instead of focusing on a particular type of instance in an AZ or region:

-You are making an hourly commitment for a 1 or 3 year term.
-Products have an On-demand rate and a Savings Plan rate.
-Resource usage consumes the Savings Plan commitment at the reduced Savings Plan rate.
-Beyond your commitment… On-demand is used.

Savings Plans come in two types:

-Compute Savings Plan

–Make an hourly spend commitment for 1 or 3 year term.
–Automatically and simultaneously apply to any eligible Amazon EC2, Fargate, and Lambda usage across all supported AWS Regions up to the hourly commitment.
–Save up to 66% compared to On-demand pricing.

-EC2 Instance Savings Plans

–Make an hourly spend commitment to an instance family and Region for 1 or 3 year term
–Any instance size - Any AZ - Any O.S - Any tenancy
–Automatically and simultaneously apply to eligible Amazon EC2 usage up to the hourly commitment
–Save up to 72% compared to On-demand pricing.

35
Q

Instance Status Checks and AutoRecovery

A

Every instance within EC2 has two high level per instance status checks.

Each of the two checks represents a separate set of tests, and so a failure of either of them suggests a different set of underlying problems.

1st - System Status.

A failure of this check could indicate one of a few major problems:

-Loss of system power
-Loss of network connectivity
-Host software issues
-Host hardware issues

This check is focused on issues impacting the EC2 service or the EC2 Host.

2nd - Instance Status.

A failure of this check could indicate one of a few major problems:

-Corrupted file system
-Incorrect Instance Networking
-OS Kernel Issues

EC2 comes with a feature allowing it to recover automatically from status check issues. You can ask EC2 to stop a failed instance, reboot it, terminate it, or you can ask EC2 to perform Auto Recovery.

Auto Recovery moves the instance to a new host and starts it up with exactly the same configuration as before. All IP addressing is maintained, and if software on the instance is set to auto start, this process can mean that the instance, as the name suggests, automatically recovers fully from any failed status check issues.

-Only works on instances with EBS volumes, not using Instance Store.
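
One common way to wire up Auto Recovery is a CloudWatch alarm on the system status check with the EC2 recover action; a hedged boto3 sketch (region, instance ID and thresholds are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

# Recover the instance when the system status check fails for 2 consecutive minutes.
cloudwatch.put_metric_alarm(
    AlarmName="recover-web-instance",  # placeholder name
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],  # EC2 auto-recover action
)
```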

36
Q

Horizontal vs Vertical Scaling

A

These are two different ways that a system can scale to handle increasing, or in some cases decreasing, load placed on that system.

Scaling is what happens when systems need to grow or shrink in response to increases or decreases of the load placed upon them by your customers. (Adding or removing resources to/from a system.)

-Vertical Scaling

A “vertically scalable” system, which is constrained to running its processes on only one computer; in such systems the only way to increase performance is to add more resources into one computer in the form of faster (or more) CPUs, memory or storage.

In this case, you’re just resizing the EC2 instance when you scale.

NEGATIVES
–Each resize requires a reboot - customer DISRUPTION - this means you can generally only scale during pre-agreed times, within outage windows.
–Larger instances often carry a $ premium
–There is an upper cap on performance - Instance Size

POSITIVES
–No application modification required - If an application can run on an instance then it can run on a bigger instance
–Works for ALL applications - even Monoliths

-Horizontal scaling

A “horizontally scalable” system is one that can increase capacity by adding more computers to the system.

Horizontally scalable systems are oftentimes able to outperform vertically scalable systems by enabling parallel execution of workloads and distributing those across many different computers.

–Sessions are everything

When you log into an application, think about Email. The state of your interaction with that application is called a “session”. With a single application running on a single server, the sessions of all customers are generally stored on that server. With Horizontal Scaling, this won’t work.

Without changes, every time you move between instances of a horizontally scaled application, you would have a different session or no session - you would be logged out. With horizontal scaling, you can be shifted between instances constantly.

–Requires application support OR off-host sessions

If you use off-host sessions then your session data is stored in another place, an external database. This means that the servers are “Stateless”, they’re just dumb instances of your application.

–No disruption while scaling - customer connections remain unaffected.

–Scaling IN = removing instances / Scaling OUT = adding instances

–No real limits to scaling - you can just keep adding instances

–Often less expensive - no large instance premium

–It can allow you to be more granular

37
Q

Instance Metadata

A

-EC2 service provides data to Instances

It is data about the instance that can be used to configure or manage a running instance. It is a way the instance, or anything running inside it, can access information about its environment that it wouldn't be able to access otherwise.

-Accessible inside ALL instances

-To access the instance metadata = http://169.254.169.254/latest/meta-data/ ** (see the sketch at the end of this card)

-Environment - Allows anything on the instance to query it for information about that instance; the information is divided into categories (host name, events, security groups…) - all information about the environment the instance is in.

-Access to Networking information

-Access to Authentication information

-It’s used by AWS to pass in temporary SSH keys for Instance Connect

-It's used to grant access to User Data - a way to make the instance run scripts to perform automatic configuration steps.

-NOT AUTHENTICATED or ENCRYPTED **
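
A small sketch of querying the metadata service from inside an instance with only the standard library (IMDSv1-style plain GET; IMDSv2 would additionally require a session token). The categories shown are examples - run this on an instance, otherwise the calls time out:

```python
from urllib.request import urlopen

# Instance metadata endpoint - only reachable from inside an EC2 instance.
BASE = "http://169.254.169.254/latest/meta-data/"

def metadata(path: str) -> str:
    """Fetch one metadata category as text."""
    with urlopen(BASE + path, timeout=2) as response:
        return response.read().decode()

print(metadata("instance-id"))
print(metadata("public-ipv4"))                   # the OS itself never sees this address
print(metadata("placement/availability-zone"))
```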