The (Almost) Definitive Guide to PowerFlex Sizing & Other Matters Flashcards
What are the restrictions for software only PowerFlex?
only available to existing ScaleIO customers (since most original ScaleIO deployments were SW-only) or net-new customers purchasing over 2 million in capacity licensing over 2 years
RPQ required
How much RAM is required per SDC?
50MB
also consumes very low host CPU
What are the SDS's responsibilities?
consuming the local storage devices within the server
ensuring the second copy of data is written to a different host before acknowledging the write back to the SDC
How much RAM does an SDS consume?
500MB-12GB per host (with MG layout)
very little CPU workload
How many vCPUs/cores are allocated for an SDS?
8-12
What protection level do protection domains have?
only one simultaneous node failure can be tolerated within a PD
fault sets allow for multiple host failures but are typically not required
What should be the max number of SDSs to put in a PD?
30 SDSs per protection domain
to maintain 6 9s of availability
What is the max number of devices in a storage pool?
300 devices
How is data laid out in FG and MG pools?
MG - 1MB
FG - 4KB
What are the two performance profiles on PowerFlex?
High
Compact
What is the high performance profile?
all flash nodes
What is the compact performance profile?
HDD nodes
What is the SVM?
Storage VM
a CentOS-based VM for ESXi environments that runs the SDS and LIA components
the local storage devices and RAID controller are typically passed through to it via DirectPath IO
NVMe drives should use RDM
What is the main difference between the high and compact performance profiles?
amount of CPU resources given
compact - SVM given 2 vCPUs/cores
high - SVM given 8 vCPUs/cores
What is the performance profile recommendation for PowerFlex?
configure High for both SSD and HDD - the new default setting
version 3.5 now allows up to 12 vCPUs and 12 SDS threads for FG pools
What is the recommendation specific to performance profile and CloudLink?
when using CloudLink D@RE, increasing the SVM allocation by another 4 vCPUs is considered best practice (from 8 to 12)
What is the all-flash performance of PowerFlex?
each SSD can provide about 80,000 Read IOPS
What is the max performance of SDS?
250,000 IOPS for reads
100,000 IOPS for writes (slower because a second copy must be written before acknowledgement)
What happens after you hit the IOPS limit of an SDS?
adding more SSDs can still raise large block performance
small block performance will not improve (the SDS itself is the bottleneck)
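The per-device and per-SDS figures above can be combined into a quick back-of-the-envelope check of how many SSDs it takes to hit the small-block read ceiling of a single SDS. A minimal sketch (the function name is illustrative, not a PowerFlex API):

```python
import math

SSD_READ_IOPS = 80_000       # approximate small-block read IOPS per SSD (from the card above)
SDS_MAX_READ_IOPS = 250_000  # per-SDS small-block read ceiling (from the card above)

def ssds_to_saturate_sds(ssd_iops: int = SSD_READ_IOPS,
                         sds_limit: int = SDS_MAX_READ_IOPS) -> int:
    """Smallest number of SSDs whose combined read IOPS reaches the SDS limit."""
    return math.ceil(sds_limit / ssd_iops)

print(ssds_to_saturate_sds())  # -> 4
```

Past roughly 4 SSDs per node, adding devices helps large-block throughput but not small-block IOPS.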
What is the typical performance of an SDC?
500,000 read or write IOPS if connected to multiple SDSs
What is the bandwidth performance of PowerFlex?
for large block IO - can saturate 200GbE per node (2 x 100GbE interfaces) to deliver 20GB/s of bandwidth to SDCs per node
an SDS normally maxes out around 10GB/s per node when using NVMe and 7-8GB/s when using SSD
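The line-rate arithmetic behind the 2 x 100GbE figure is worth spelling out; the delivered number on the card (~20GB/s) sits below the raw rate because of protocol overhead. A rough sketch:

```python
# Raw line rate of 2 x 100GbE, converted from gigabits to gigabytes (8 bits/byte).
# Protocol overhead is ignored here, which is why the delivered figure from the
# card above (~20GB/s) lands below the 25GB/s raw rate (~80% efficiency).
links = 2
gbit_per_link = 100
raw_gb_per_sec = links * gbit_per_link / 8  # 25.0 GB/s raw line rate

delivered_gb_per_sec = 20                   # per the card above
efficiency = delivered_gb_per_sec / raw_gb_per_sec

print(raw_gb_per_sec, efficiency)  # -> 25.0 0.8
```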
What are the latency metrics of PowerFlex?
0.2-1ms w/ All Flash and very high IOPS
100-200 microseconds w/ NVMe and low queue depth IO profiles
How does PowerFlex Manager run?
runs as an OVA-deployed VM on dedicated management nodes external to the PowerFlex workload nodes
can also be managed through REST API
What is a recommendation for sizing Oracle environments?
often beneficial to add compute-only nodes to extract the most value out of expensive Oracle licenses, which are priced per core
What is the only node type that can support GPUs?
R740xd
Why do you need to configure minimum 10% spare capacity?
to maintain protection after a single node failure - the spare capacity must absorb the rebuilt copy of the failed node's data
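One way to read the 10% rule: spare capacity must at least equal one node's share of the pool, so a failed node's data can be rebuilt onto the remaining nodes. A minimal sketch (function name is illustrative, not a PowerFlex API):

```python
def min_spare_fraction(num_nodes: int) -> float:
    """One node's share of total capacity - the least spare capacity
    needed to absorb a full rebuild after a single node failure."""
    return 1 / num_nodes

# At 10 nodes, one node holds 10% of the pool, matching the 10% guideline;
# smaller clusters need a proportionally larger spare fraction.
print(min_spare_fraction(10))  # -> 0.1
print(min_spare_fraction(5))   # -> 0.2
```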
What is the rule when sizing for performance?
the more nodes the better
size with smaller drives to get more nodes
Why is bandwidth and latency better on NVMe devices rather than SSDs?
IO does not need to go through the HBA330 controller - NVMe devices attach directly to the CPU over PCIe