Server Storage Flashcards
storage technologies
- storage device dimensions/form factors have to be considered
- 3.5 in large form factor (LFF) hard drives are common
- small form factor (SFF) disks = 2.5 in
- storage capacity
- read/write speed
HDDs
- magnetic disk drives
- sealed enclosure filled with filtered air or helium (not actually a vacuum)
- contain multiple platters
- spinning disks
- read/write heads on actuator arm for each platter
RPMs (HDDs)
- the faster the disk spins, the quicker the read/write times
- norm for desktop disks = 7200
- norm for laptop disks = 5400
- faster server HDDs = 15000
seek time (HDDs)
time needed to position the read/write head over the correct track of the disk platter when locating data on disk
rotational latency
- the disk platter must spin the target sector under the read/write head before data can be transferred
- typically a few milliseconds (worked example below)
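A quick worked example in Python (a sketch; the RPM figures are the norms listed above) — average rotational latency is the time for half a rotation:
    # average rotational latency = time for half of one full rotation
    def avg_rotational_latency_ms(rpm: int) -> float:
        seconds_per_rotation = 60.0 / rpm
        return seconds_per_rotation / 2 * 1000  # convert to milliseconds

    for rpm in (5400, 7200, 15000):
        print(rpm, "RPM ->", round(avg_rotational_latency_ms(rpm), 2), "ms")
    # 5400 -> 5.56 ms, 7200 -> 4.17 ms, 15000 -> 2.0 ms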
bus width (HDDs)
- number of bits that can be transferred simultaneously
- fast disk transmission technologies often use serial rather than parallel transmission schemes
IOPS (HDDs)
- input/output operations per second
- how often a disk can perform I/O operations depends on the specific workload
- higher IOPS is generally better
transfer rate (HDDs)
the per-second rate at which data is moved into/out of the disk; indicates overall data transfer speed (related to IOPS, as sketched below)
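Transfer rate and IOPS are tied together by the I/O size of the workload; a minimal Python sketch (the IOPS and block sizes are made-up illustrative numbers):
    # throughput = IOPS x I/O size
    def throughput_mb_per_s(iops: int, io_size_kb: int) -> float:
        return iops * io_size_kb / 1024

    print(throughput_mb_per_s(10_000, 4))    # small random I/O   -> ~39 MB/s
    print(throughput_mb_per_s(10_000, 128))  # large sequential I/O -> 1250 MB/s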
SSDs
- solid state drives
- no moving parts
- more expensive than HDDs
SATA
- serial advanced technology attachment interface
- used to connect both SSDs and HDDs
SSDs in cloud
- cloud providers charge premium pricing for SSD-backed storage
- opting for a disk with a higher IOPS value also increases cost
SSHDs
- hybrid drives
- combination of hard disk/solid state
- spinning platters and faster flash memory
- cache frequently accessed data on flash memory
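A toy Python sketch of that caching idea (the promotion threshold and data structures are assumptions for illustration, not any vendor's firmware logic):
    from collections import Counter

    PROMOTE_AFTER = 3        # assumed threshold: reads before promotion
    flash_cache = {}         # block_id -> data held in flash
    read_counts = Counter()

    def read_block(block_id, read_from_platter):
        if block_id in flash_cache:            # fast path: flash hit
            return flash_cache[block_id]
        data = read_from_platter(block_id)     # slow path: spinning platter
        read_counts[block_id] += 1
        if read_counts[block_id] >= PROMOTE_AFTER:
            flash_cache[block_id] = data       # promote hot block to flash
        return data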
storage tiers
- valuable data should be quickly accessible
- storage tier policies to determine which type of data will be stored on which specific storage media
- hierarchical storage management (HSM)
- place tiered storage capabilities in front of SAN storage
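A toy Python sketch of a tiering policy (the day thresholds and tier labels are assumptions for illustration):
    # pick a storage tier from how recently the data was accessed
    def pick_tier(days_since_last_access: int) -> str:
        if days_since_last_access <= 7:
            return "tier 1: SSD"          # hot data on the fastest media
        if days_since_last_access <= 90:
            return "tier 2: HDD"          # warm data
        return "tier 3: tape/archive"     # cold data on the cheapest media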
SAS (disk interface)
- serial-attached SCSI
- serial bit transmission
- hot-pluggable
- newer iteration of older SCSI standard
- more expensive than SATA
- smaller storage capacity than SATA
- designed for constant use
- often used for servers
SATA (disk interface)
- serial ATA
- serial bit transmission
- not designed for constant use
- used often in personal workstations
eSATA (disk interface)
- similar to SATA
- interface connector is external to the device
- some devices have a built-in eSATA port
SCSI (disk interface)
- small computer system interface
- parallel bit transmission
- used often for servers
USB (disk interface)
- universal serial bus
- serial bit transmission
- convenient external connectivity
- used often in personal workstations
FC (disk interface)
- fibre channel
- used in SANs
- host bus adapters (HBAs) are required in the server to access SAN storage
- used often for servers
serial bit transmission
sends data bits one after another over a single channel
parallel bit transmission
sends multiple data bits simultaneously over multiple channels
optical drives
- CD/Blu-ray/DVD
- write once read many (WORM) media
- a server may require an optical drive to boot from for recovery purposes or to install an OS
cloud storage
- can provision/deprovision storage instantly
- only pay for the storage needed
- legal/regulatory restrictions may limit what can be stored in public cloud storage
- on-premises storage can back up to the cloud
DAS
- direct attached storage
- storage disks are housed inside the server chassis
- storage disks only available locally to that server
NAS
- network attached storage
- SMB (Windows)
- NFS (Linux)
- CIFS
- use of higher layer protocols distinguishes NAS from SANs
- servers connect to NAS storage over standard network equipment/using standard protocols
- SANs = highly specialized high-speed networks designed to transmit disk I/O traffic using protocols designed for this use
- NAS can be hardware appliances/servers configured for this purpose
CIFS
- common internet file system
- specific implementation of SMB
iSCSI
- internet small computer system interface
- makes storage accessible to hosts over standard TCP/IP network on a small scale
- less expensive than FC
- slower and less reliable than FC
- requires separate network segment/VLAN
iSCSI initiators
- can be implemented as hardware/software
- hardware initiators support enhanced options, e.g. booting the server OS over the network
- initiator needs network address/port to contact target
- specify logical unit number (LUN) using iSCSI qualified name (IQN) after connection is established
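For reference, an IQN follows the pattern iqn.<year-month>.<reversed domain>:<unique name>; a made-up example:
    iqn.2024-01.com.example:storage.server01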
iSCSI LUNs
- hosts disk space on IP network
- disk space is consumed by servers
- allocated blocks of disk space = LUNs
- iSCSI LUNs can also be consumed by some client OS
FCoE
- fibre channel over ethernet
- places disk commands into ethernet frames
- requires converged network adapters (CNAs)
- requires FCoE switches
- requires copper/fiber optic cables
FC host bus adapter (HBA)
- enables VMs to communicate with SAN
- installed on hypervisor host
- identified by a unique 16-hex-digit (64-bit) world wide node name (WWNN)
- can have multiple ports
- each port can connect to different FC switches for redundancy
SANs
- separate storage from individual hosts
- hosts connect to storage over network
- storage appears to be a local device to host
- use specialized network equipment/network storage protocols, e.g. FC
fabric
- the network formed by FC switches and the devices attached to them
- each FC switch has a single WWNN plus a WWPN for each port
- storage arrays connect to the FC switches
LUNs
- administrators configure LUNs and LUN masks to determine which servers can use which configured storage
- LUN uniquely identifies disk space on the storage array
- a LUN mask is usually configured at the HBA level (e.g. to prevent a Windows server from seeing LUNs used by Linux servers); see the sketch below
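A minimal Python sketch of LUN masking (the WWPNs and LUN numbers are made up):
    # LUN mask: which LUNs each server HBA port (by WWPN) may see
    lun_masks = {
        "10:00:00:90:fa:00:00:01": {0, 1},   # Windows server -> LUNs 0-1
        "10:00:00:90:fa:00:00:02": {2},      # Linux server   -> LUN 2
    }

    def visible_luns(wwpn: str) -> set:
        return lun_masks.get(wwpn, set())    # unknown HBAs see no LUNs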
zoning
- larger SANs use zoning instead of LUN masking
- configured at FC switch level
- doesn’t apply to FCoE or iSCSI
- groups nodes into zones
- enables controlling LUN visibility to all nodes in the same zone
- use separate VLANs to achieve this with FCoE or iSCSI
necessary features of cloud storage
- pool of resources shared by multiple tenants
- IT services available on demand from anywhere using any device
- rapid elasticity
- user self-service provisioning/deprovisioning
- metered services
VSS
- volume shadow copy service
- enables data backup without requiring applications to be taken offline during backup
disk quotas
- limit how much disk space can be used in a folder/by a user
- soft quotas aren't enforced but create a log entry when exceeded
- hard quotas are enforced
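The soft/hard distinction as a minimal Python sketch (the limits are made-up examples):
    SOFT_MB, HARD_MB = 800, 1000   # assumed quota limits

    def allow_write(used_mb: int, write_mb: int) -> bool:
        if used_mb + write_mb > HARD_MB:
            return False                          # hard quota: write refused
        if used_mb + write_mb > SOFT_MB:
            print("soft quota exceeded: logged")  # soft quota: log only
        return True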
thin provisioning
- overbooking/overcommitting disk space
- admin adds storage to server
- multiple thinly provisioned disk volumes are created, whose combined size can exceed the storage actually added to the server
- volumes will use disk space as they grow
- don’t have to know storage needs in advance
- limited to total disk space physically available
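A small Python sketch of the overcommitment math (all sizes are made up):
    physical_tb = 10                 # storage actually added to the server
    provisioned_tb = [6, 5, 4]       # thinly provisioned volume sizes
    used_tb = [1.5, 2.0, 0.5]        # space the volumes actually consume

    print(sum(provisioned_tb))       # 15 TB promised -> overcommitted (> 10 TB)
    print(sum(used_tb))              # 4.0 TB used -> still fits physically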
compression
- compression tools save space by reducing redundant occurrences of data
- Windows/Linux servers use GUI/command line to work with compression
- Windows compact command
- Linux gzip command
data deduplication
- remove redundant data blocks to conserve space
- Windows server includes data deduplication for NTFS volumes
- tools to measure current disk space usage
- Microsoft file server resource manager (FSRM)
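The core mechanism as a minimal Python sketch (fixed-size blocks with SHA-256 fingerprints; real implementations use variable block sizes and on-disk structures):
    import hashlib

    BLOCK = 4096
    store = {}   # fingerprint -> block; each unique block stored once

    def dedup(data: bytes) -> list:
        refs = []
        for i in range(0, len(data), BLOCK):
            block = data[i:i + BLOCK]
            fp = hashlib.sha256(block).hexdigest()
            store.setdefault(fp, block)   # keep only the first copy
            refs.append(fp)               # file = list of block references
        return refs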
Windows image files
- .WIM standard file type
- save storage space by storing multiple images of the same OS within a single file (single-instance storage)
DISM
- deployment image servicing and management
- DISM.exe
- tool to work with Windows image files
image management tools
- DISM
- imagex.exe
- Microsoft deployment toolkit (MDT)
- Microsoft system center configuration manager (SCCM)
RAID configuration
- redundant array of independent disks
- enables grouping multiple physical disks together as a logical unit
- improved I/O
- fault tolerance
- hardware RAID support usually integrated on server motherboards
- can get expansion cards (RAID controllers)
- software RAID is built into server OS
dynamic disks
- required for using software RAID in Windows
- disks start as basic disks
- prompted to convert to dynamic disks when configuring software RAID levels
hardware RAID array controllers
- often have battery-backed caches
- cached data is committed to disk after a system crash, once the system is rebooted
- use redundant RAID controllers
RAID 0
- uses disk striping
- requires at least 2 disks
- data to be written to disk is broken into blocks (stripes) that are evenly written across the disk array
- improves disk I/O performance
- offers no fault tolerance
RAID 1
- uses disk mirroring
- requires at least 2 disks
- data written to disk partition on 1 disk is also written to a disk partition on a different disk
- can use only 50% of disk space
- tolerates disk failure
- doesn’t replace backups
RAID 5
- uses disk striping with distributed parity
- requires at least 3 disks
- data is striped and evenly written across the disk array
- stores parity (error recovery) information for each stripe on a separate disk from its related data stripe
- tolerates single disk failure
- can reconstruct in memory/on demand any data from failed disk
RAID 6
- uses double parity RAID
- requires at least 4 disks
- data is striped and distributed evenly across the disk array
- stores 2 parity (error recovery) blocks per stripe, each on a different disk
- never stores parity and its related data on the same disk
- tolerates 2 disk failures
- can reconstruct in memory/on demand any data from failed disks
RAID 10
- uses RAID level 1 then 0
- uses disk mirroring followed by striping
- provides fault tolerance and performance
- requires at least 4 disks
- stripes data across mirrored pairs
- tolerates multiple disk failures as long as they are not in the same mirrored pair
- useful for busy databases
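Usable capacity per RAID level, as a minimal Python sketch (assumes n identical disks of disk_tb each; RAID 1 assumes a simple two-disk mirror):
    def usable_tb(level: int, n: int, disk_tb: float) -> float:
        if level == 0:  return n * disk_tb          # striping, no redundancy
        if level == 1:  return disk_tb              # mirror: 50% of 2 disks
        if level == 5:  return (n - 1) * disk_tb    # one disk's worth of parity
        if level == 6:  return (n - 2) * disk_tb    # two disks' worth of parity
        if level == 10: return (n // 2) * disk_tb   # half lost to mirroring
        raise ValueError("unsupported RAID level")

    print(usable_tb(5, 4, 2.0))   # four 2 TB disks in RAID 5 -> 6.0 TB usable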
making storage space usable
- initialize disks
- partition disks
- format partitions with a particular file system
disk initialization
- master boot record (MBR)
- GUID partition table (GPT)
disk management tools for Windows
- diskpart.exe
- GUI disk management
- server manager
disk management tools for Linux
- fdisk command (MBR)
- gdisk command (GPT)
- logical volume management (LVM)
LVM
- logical volume management
- used to group physical disks together upon which logical volumes can be created
file systems supported by Windows server
- file allocation table (FAT)
- FAT32
- extended FAT (exFAT)
- new technology file system (NTFS)
- resilient file system (ReFS)
NTFS
- supersedes FAT/FAT32
- journaled file system
- supports compression/encryption/file system security/larger file and partition sizes/user disk quotas
FAT32/exFAT
- most commonly used with removable storage
- can format flash drives as NTFS
ReFS
- newer file system
- designed to be more resilient to file system corruption
- ability to scan for/correct file system corruption while disk volume is mounted and in use
- doesn’t support encrypting file system (EFS)
- doesn’t support data deduplication
- can’t be used on an OS boot drive
file systems supported by UNIX/Linux
- UNIX file system (UFS)
- zettabyte file system (ZFS)
- extended file system (EXT2/EXT3/EXT4)
- ReiserFS
EXT4/ReiserFS
- common in modern Linux environments
- journaled file systems
journaled file systems
- all file system write transactions are logged before being committed to disk
- makes file system less susceptible to corruption
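The write-ahead idea as a toy Python sketch (not any real file system's on-disk journal format):
    journal = []   # stand-in for the on-disk journal area

    def journaled_write(apply_to_disk, tx_id: int, data: bytes):
        journal.append(("intent", tx_id, data))  # 1. log the intent first
        apply_to_disk(data)                      # 2. commit to main storage
        journal.append(("done", tx_id))          # 3. mark transaction complete
        # crash recovery: replay any tx with "intent" but no "done" marker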
VMFS
- virtual machine file system
- specific to VMware
- designed to support simultaneous read/write access by multiple cluster nodes to VM hard disk files/snapshots
- enables live migration of VMs between VMware ESXi hosts with zero downtime
CSVs
- cluster shared volumes
- supported by Microsoft failover clustering
- enables live migration of Hyper-V VMs between clustered Hyper-V hosts with zero downtime
hot swappable disks
failed disks can be replaced while the server keeps running
hot spares
extra disks installed and powered but idle; can automatically take over when an active disk fails
cold spares
- extra disks that can be swapped out when used disks fail
- requires that system is shut down
most common disk interface in servers
SAS
tier 2 storage
HDD
VMFS benefit over NTFS
enables multiple cluster nodes to read/write the same file system simultaneously