5. Implement High Availability Flashcards

1
Q

RPO

A

Recovery Point Objective

The maximum amount of data loss, measured as a window of time, that is acceptable in the event of a failure

2
Q

RTO

A

Recovery Time Objective

The length of time an application can be unavailable before service must be restored

3
Q

MTBF

A

Mean Time Between Failures

The average operating time between failures of a system

4
Q

MTTR

A

Mean Time To Recover

The average time required to restore a failed system to operation

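The two cards above combine into the standard availability formula, Availability = MTBF / (MTBF + MTTR). A minimal sketch, with purely illustrative figures (the numbers below are assumptions, not from the cards):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the service is expected to be up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical figures, purely for illustration
mtbf = 1000.0  # mean of 1000 hours between failures
mttr = 2.0     # mean of 2 hours to recover

a = availability(mtbf, mttr)
print(f"Availability: {a:.4%}")
print(f"Expected downtime per year: {(1 - a) * 8760:.1f} hours")
```

Shrinking MTTR has the same effect on availability as growing MTBF, which is why fast failover matters as much as reliable hardware.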
5
Q

Failover Clustering

A

Used to provide high availability for stateful applications and services such as SQL Server and Exchange

6
Q

Network Load Balancing

A

Used for network-based services such as Web, FTP and RDP servers. Allows configuring two or more servers as a single virtual cluster.

7
Q

NLB Unicast Mode

A
  • Cluster adapters for all nodes are assigned the same MAC address.
  • Can cause subnet flooding since all packets are sent to all ports on the switch.
  • Node-to-node communication is not possible (unless each host has a second network adapter).
8
Q

NLB Multicast Mode

A
  • Cluster adapters for all nodes get their own MAC address
  • The cluster is assigned a multicast MAC address (derived from the cluster IP)
9
Q

NLB IGMP Multicast

A

Similar to multicast, but prevents switch flooding because cluster traffic is forwarded only to the switch ports of NLB cluster members

10
Q

IGMP

A

Internet Group Management Protocol

11
Q

NLB Stop

A

Cluster stops immediately, all active connections are killed

12
Q

NLB Drainstop

A

Cluster stops after answering all current connections, no new connections are accepted.

13
Q

Hyper-V Replica

A

Replicates VMs asynchronously from a primary site to a secondary site

14
Q

Extended (Chained) Replication

A

Host 1 > Host 2 > Host 3

Does not support application-consistent replication

15
Q

NLB Affinity Types

A
  • None
  • Single
  • Class C
16
Q

NLB Affinity: None

A

NLB does not assign clients to a node, all requests can go to any node

17
Q

NLB Affinity: Single

A

Single affinity assigns each client, by source IP address, to a specific node. Best intranet performance.

18
Q

NLB Affinity: Class C

A

NLB links clients with a specific node based on the Class C part of the client’s IP address. Best internet performance.

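As a conceptual model only (not NLB's actual hashing algorithm), Class C affinity can be pictured as mapping the /24 network of the client's IP address to a node index, so every client behind the same Class C network lands on the same node. The function name and sample addresses here are hypothetical:

```python
import ipaddress

def pick_node_class_c(client_ip: str, node_count: int) -> int:
    """Map a client to a node index by the /24 (Class C) network of its IP."""
    net = ipaddress.ip_network(f"{client_ip}/24", strict=False)
    return int(net.network_address) % node_count

# Two clients in the same /24 always land on the same node
print(pick_node_class_c("203.0.113.10", 3) == pick_node_class_c("203.0.113.200", 3))  # prints True
```

This is why Class C affinity suits internet clients: requests from one organization often arrive through several proxies in the same /24, and they all stick to one node.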
19
Q

NLB Cluster Requirements

A
  • Cluster adapter can use only TCP/IP
  • Servers in cluster must have static IP addresses
20
Q

Test Failover

A

Verifies a replica can start in the secondary site

21
Q

Planned Failover

A

Used during planned downtime. The primary VM is powered off, remaining changes are replicated, and the replica is powered on; replication is then reversed so the original primary can be restored after failover is complete.

22
Q

Unplanned Failover

A

Initiated from the replica site, and only if the primary machine is offline; changes since the last replication may be lost.

23
Q

DHCP Guard

A

Drops DHCP server messages from unauthorized VMs pretending to be a DHCP server.

24
Q

Router Guard

A

Router Guard drops advertisement and redirection packets from unauthorized VMs pretending to be routers. Similar to DHCP Guard.

25
Q

Protected Network

A

Virtual machine will be moved to another cluster node if a network disconnection is detected.

26
Q

Port Mirroring

A

Allows VM network traffic to be monitored by copying packets and forwarding to another VM for monitoring

27
Q

NIC Teaming

A

Place NICs in a team in the guest operating system to aggregate bandwidth and provide redundancy. Useful if teaming is not configured in management OS

28
Q

Device Naming

A

Causes the name of the network adapter to be propagated into supported guest OSes

29
Q

VM Checkpoints

A

A point-in-time snapshot of a VM's state, data and configuration that the VM can later be reverted to

30
Q

Software Load Balancing

A

Allows having multiple servers hosting same virtual networking workload in a multitenant environment.

31
Q

Hyper-V Live Migration

A

Transfers running VM from one host to another with no downtime.

32
Q

Hyper-V Quick Migration

A

Saves the VM's state, moves the VM to another node, and restarts it there; the VM is briefly unavailable during the move

33
Q

Hyper-V Move VM

A

Power off VM and copy to another host and then power on.

34
Q

CredSSP for Live Migration

A

Requires signing in to the source server, but no constrained delegation setup

35
Q

Kerberos for Live Migration

A

Avoids having to sign in to the source server, but requires constrained delegation to be set up

36
Q

Hyper-V Live Migration Performance: TCP/IP

A

Memory of VM is copied over the network to destination over TCP/IP

37
Q

Hyper-V Live Migration Performance: Compression

A

Memory is compressed and then copied to destination over TCP/IP

38
Q

Hyper-V Live Migration Performance: SMB

A

Memory is copied to destination over SMB connection. SMB Direct is used if NICs at source and destination have Remote Direct Memory Access enabled

39
Q

Hyper-V Live Migration Requirements

A
  • Administrator account (local/domain)
  • Hyper-V role installed; VM configuration version 5 or later
  • Hosts in the same domain
  • Hyper-V management tools installed
40
Q

Hyper-V Shared Nothing Live Migration

A

Migration between hosts not in a cluster. Requires Kerberos constrained delegation configuration on each server.

41
Q

Hyper-V Storage Migration

A

Allows migrating a running VM from one storage device to another without downtime

42
Q

Storage Migration Requirements

A

The VM must use virtual hard disks (not pass-through physical disks) for storage

43
Q

NLB Hardware Requirements

A
  • All hosts on same subnet
  • No limit on number of NICs
  • All NICs must be either multicast or unicast (modes cannot be mixed)
  • If using unicast mode, the NIC handling client-to-cluster traffic must support changing its MAC address
44
Q

What is a failover cluster?

A

All of the clustered application or service resources are assigned to one node/server in the cluster. If the node/server goes offline, another node/server spins up the resource and all traffic to the cluster is automatically sent to the new live node.

Examples of commonly clustered applications: SQL and Exchange

Examples of commonly clustered services: Hyper-V

45
Q

In what editions of Windows Server 2016 is failover clustering available?

A
  • Datacenter
  • Standard
  • Hyper-V Server
46
Q

Failover Clustering Server Requirements

A
  • All server hardware must be Server 2016 certified
  • All of the “Validate a Configuration Wizard” tests must pass
47
Q

Storage Failover Clustering Requirements

A
  • Disks available must be Fibre Channel, iSCSI or Serial Attached SCSI
  • Each node must have a dedicated NIC for iSCSI connectivity
  • Multipath software must be based on Microsoft’s Multi-path I/O (MPIO)
  • Storage drivers must be based on `storport.sys`
  • Drivers and firmware for storage controllers of each node should be identical
  • All storage hardware should be certified for Server 2016
48
Q

Failover Clustering Network Requirements

A
  • Nodes should be connected to multiple networks for redundancy
  • NICs should be same make, same drivers & firmware
  • Network components should be certified for Server 2016
49
Q

Failover Cluster Network Connections

A
  • Public - client to cluster
  • Private - node to node
50
Q

Cluster Domain Scenarios

A
  • Single-domain clusters
  • Multi-domain clusters
  • Workgroup clusters
51
Q

Site-Aware Clustering

A

Also known as stretch clustering or geoclustering, site-aware clustering is the practice of having clusters span geographic locations. Server 2016 clusters can be configured to be site-aware, allowing administrators to set up and control cross-site heartbeats for optimal configuration.

52
Q

Cluster Quorum

A

Consensus on the status of each of the nodes in the cluster. Quorum is achieved when a majority of the available votes are present and is required for the cluster to come (or stay) online. Best practice is for the total number of quorum votes to be odd.

53
Q

The Four Quorum Modes

A
  1. Node majority (no witness)
  2. Node and disk majority (disk witness)
  3. Node and file share majority (file share witness)
  4. No majority (disk witness only)
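The majority rule behind quorum can be sketched in a few lines; the vote counts below are illustrative:

```python
def has_quorum(votes_online: int, total_votes: int) -> bool:
    """Cluster stays online only while a majority of votes respond."""
    return votes_online > total_votes // 2

# 4 nodes + 1 disk witness = 5 votes: the cluster survives losing any 2 votes
print(has_quorum(3, 5))  # True
print(has_quorum(2, 5))  # False

# With an even total (4 nodes, no witness), losing 2 of 4 votes loses quorum,
# which is why an odd total number of votes is the best practice
print(has_quorum(2, 4))  # False
```

Adding a witness to an even-numbered cluster buys tolerance for one more failure at the cost of a single extra vote.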
55
Q

Disk Witness

A
  • 512 MB minimum
  • Dedicated to cluster
  • Must pass validation tests for storage
  • NTFS or ReFS formatting

Use when all nodes can see the disk and shared storage is in use.

56
Q

File Share Witness

A
  • 5 MB free space minimum
  • File share must be dedicated to the cluster

Use for multi-site disaster recovery; the witness must be hosted on an SMB file share.

57
Q

Cloud Witness

A

New to Server 2016, cloud witness leverages the use of Microsoft Azure to have an “always-on in any location” quorum vote.

58
Q

Dynamic Quorum Management

A

New to Server 2016, quorum votes for nodes are automatically added/removed as nodes are added/removed from the cluster. This is enabled by default.

59
Q

“Validate a Configuration Wizard”

A

Runs four types of tests:

  • Software and hardware inventory
  • Network tests
  • Storage tests (can take clustered storage offline)
  • System configuration tests

Validates whether or not a cluster is supported by Microsoft. Report is stored in %windir%\Cluster\Reports

60
Q

Actions on Nodes in a Cluster

A
  • Pause node - prevents failing over to the node; useful for maintenance/troubleshooting
  • Evict node - irreversible; kicks the node out of the cluster. The node can be re-added from scratch; useful if a node is damaged beyond repair.
61
Q

Built-in Roles and Features that can be Clustered

A
  • DFS Namespace Server
  • DHCP Server
  • Distributed Transaction Coordinator (DTC)
  • File Server
  • Generic Application
  • Generic Script
  • Hyper-V Replica Broker
  • iSCSI Target Server
  • iSNS Server
  • Message Queueing
  • Virtual Machine
62
Q

Cluster Failover Process

A
  1. Cluster service takes all of the resources in the role offline, following the dependency hierarchy
  2. Cluster service transfers the role to the node that is listed next on the application’s list of preferred host nodes
  3. Cluster service attempts to bring all of the role’s resources online, starting at the bottom of the dependency hierarchy

Steps assume live migration is not being used

63
Q

Cluster Failback Settings

A

Determine when, if ever, a role/application should fail back to the primary cluster node when it becomes available. Default behavior is “Prevent Failback” but it can be scheduled and set by the administrator.

64
Q

Cluster Dependency Viewer

A

Gives visual report of how the roles/services for the clustered resource are dependent on other roles/services in the hierarchy.

65
Q

What are Resources in a Cluster?

A

The smallest configurable unit in a cluster. Resources include physical or logical objects such as disks, IP addresses, and file shares. Resource policies can be configured to determine how resources respond when a failure occurs and how they are monitored for failures.

66
Q

Resource Policy Options

A
  • If Resource Fails, Do Not Restart
  • If Resource Fails, Attempt Restart on Current Node
  • If Restart Is Unsuccessful, Fail Over All Resources In This Service or Application
  • If All The Restart Attempts Fail, Begin Restarting Again After The Specified Period (hh:mm)
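The restart-then-fail-over behavior these options describe can be sketched as follows; `handle_resource_failure`, its callbacks, and the retry limit are illustrative names, not the cluster service's actual implementation:

```python
def handle_resource_failure(restart, fail_over, max_restarts=3):
    """Apply a restart-then-fail-over policy to a failed resource.

    restart() returns True on success; fail_over() moves the whole role
    to the next preferred node.
    """
    for attempt in range(1, max_restarts + 1):
        if restart():
            return f"restarted on attempt {attempt}"
    fail_over()  # all restart attempts on the current node failed
    return "failed over"
```

The "Do Not Restart" and "Begin Restarting Again After The Specified Period" options correspond to setting the retry limit to zero or wrapping this loop in a timed retry, respectively.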
67
Q

Cluster Shared Volumes

A

Cluster Shared Volumes (CSV) enable multiple nodes in a failover cluster to simultaneously have read-write access to the same LUN (disk) that is provisioned as an NTFS volume.

68
Q

Cluster-Aware Updating

A

Allows system updates to be applied automatically while the cluster remains available during the entire update process.

69
Q

Node Fairness (Virtual Machine Load Balancing)

A

Prevents any one host/node from being overloaded with too many running VMs by automatically redistributing VMs to other hosts/nodes according to the desired balance settings.

70
Q

Scale-Out File Server for Application Data

A

Utilizing Storage Spaces you can create a Scale-Out File Server with highly available clustered disks which are useful for Hyper-V VM disk storage as well as SQL Server database file storage.

71
Q

VM Drain on Node Shutdown

A

Windows Server will automatically attempt to live migrate VMs on a cluster node to another node during a reboot/shutdown.

72
Q

Global Update Manager Mode

A

The Global Update Manager is a component of the cluster that ensures that before a change is marked as being committed for the entire cluster, all nodes have received and committed that change to their local cluster database. The GUM is only as fast as the slowest node in the cluster.

New to Server 2016 is Global Update Manager mode which allows you to configure the GUM read-write modes manually to speed up the processing of changes by the GUM.

73
Q

Hyper-V Replica Broker

A

Allows VMs in a cluster to be replicated. The Hyper-V Replica Broker keeps track of which nodes VMs reside on and ensures replication is maintained.

74
Q

Storage Spaces Direct

A

Storage Spaces Direct uses locally attached drives on servers to create highly available storage. It is conceptually similar to RAID but done at the software level (Windows). Disks on one node are available for use by the whole cluster and parity is maintained on each node in the cluster for highly available storage.

Multiple physical disks together -> Storage Pool

Storage Spaces = Virtual Disks created from Storage Pools

75
Q

Storage Spaces Direct Hardware Requirements

A
  • 2-16 servers with locally attached SATA, SAS or NVMe drives
  • Must have at least two SSDs on each server and at least four additional drives (2 SSD + 4 HDD)
76
Q

Software Storage Bus

A

New Storage Spaces Direct feature which allows all of the servers to see all of each other’s local drives by spanning the cluster and establishing a software-defined storage structure.
