Of HA, DRS and LUNs in High Density Environments


I’ve been doing a lot of work these days sizing really high density VMware installations (>800 VMs).  At this scale, the questions common to sizing, scaling and capacity planning cross over from a relatively simple exercise into a deeply complex one, and the penalty for getting things wrong increases exponentially.  One of the most critical aspects of planning for this type of density is the interaction between the ESX hosts and the storage tier.  For the purposes of this entry, let’s assume that compute subscription ratios are kept broadly generic (4 vCPUs per pCPU core, 1 vCPU per VM and a comparable RAM-per-VM ratio) so the compute tier calculations stay simple.

For the storage sizing exercise, a number of VMware limits come into play when sizing for maximums.  In particular, certain physical storage limits become critical (a quick sanity-check sketch follows the list):

  • LUN per host limit of 256
  • LUN size limit of 2TB-512B
  • VM per LUN limit of 256
  • SCSI bus contention limits (difficulty in planning for this is inversely proportional to the level of insight into the workload)
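
To make the interplay of these limits concrete, here is a minimal sanity-check sketch in Python.  The limit constants mirror the published maximums listed above; the ~20 VMs-per-LUN figure is the practical contention guideline discussed further down, and the example layout numbers are purely hypothetical.

    # Minimal storage-layout sanity check (illustrative only)
    MAX_LUNS_PER_HOST = 256      # LUN per host limit
    MAX_LUN_SIZE_TB = 2.0        # ~2TB-512B VMFS LUN size limit
    MAX_VMS_PER_LUN = 256        # VM per LUN limit
    PRACTICAL_VMS_PER_LUN = 20   # real-world SCSI contention guideline

    def check_layout(luns_per_host, lun_size_tb, vms_per_lun):
        """Return human-readable violations for a proposed LUN layout."""
        issues = []
        if luns_per_host > MAX_LUNS_PER_HOST:
            issues.append(f"{luns_per_host} LUNs/host exceeds the {MAX_LUNS_PER_HOST} limit")
        if lun_size_tb > MAX_LUN_SIZE_TB:
            issues.append(f"{lun_size_tb} TB exceeds the {MAX_LUN_SIZE_TB} TB VMFS LUN limit")
        if vms_per_lun > MAX_VMS_PER_LUN:
            issues.append(f"{vms_per_lun} VMs/LUN exceeds the {MAX_VMS_PER_LUN} limit")
        elif vms_per_lun > PRACTICAL_VMS_PER_LUN:
            issues.append(f"{vms_per_lun} VMs/LUN risks SCSI contention (~{PRACTICAL_VMS_PER_LUN} is the practical ceiling)")
        return issues

    # Hypothetical layout: 8 LUNs per host, 2 TB each, 32 VMs per LUN
    for issue in check_layout(luns_per_host=8, lun_size_tb=2.0, vms_per_lun=32):
        print(issue)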

When planning a virtualization topology, the realities of vCenter, HA/DRS and VMotion must also be taken into consideration (a small validation sketch follows the list):

  • 32 node absolute limit for HA/DRS (reality of migration windows and bus contention make this much lower in practice)
  • VMotion requirement of all hosts having access to all LUNs
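
A corollary of the VMotion requirement is that the cluster-wide datastore count is bounded by the per-host LUN limit.  A small validation sketch makes the point (the example cluster numbers are hypothetical):

    # Cluster-level checks implied by the HA/DRS and VMotion realities above
    MAX_NODES_PER_CLUSTER = 32   # absolute HA/DRS node limit
    MAX_LUNS_PER_HOST = 256      # every host must see every shared LUN

    def validate_cluster(host_count, shared_datastores):
        if host_count > MAX_NODES_PER_CLUSTER:
            return f"{host_count} hosts exceeds the {MAX_NODES_PER_CLUSTER} node HA/DRS limit"
        if shared_datastores > MAX_LUNS_PER_HOST:
            return (f"{shared_datastores} shared datastores exceeds the "
                    f"{MAX_LUNS_PER_HOST} LUNs each host can address")
        return "cluster fits within the published maximums"

    print(validate_cluster(host_count=16, shared_datastores=200))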

The core density of modern processors allows for very high VM density per blade even given conservative ratios.  Consider that a 2-way Nehalem-EX blade provides 16 cores.  Even at a 4:1 ratio, with single-vCPU VMs, this leads to a 64 VM ESX host.  With blades offering up to 256GB of RAM, and the efficiency of the Nehalem core (and high clock rates), this is a very realistic scenario.
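
Spelled out, the density math looks like this (the 3GB average RAM per VM is a hypothetical figure, used only to show that a 256GB blade is unlikely to be the bottleneck):

    # Back-of-the-envelope VM density per blade
    sockets = 2                  # 2-way Nehalem-EX blade
    cores_per_socket = 8         # 16 physical cores total
    vcpu_per_core = 4            # conservative 4:1 subscription ratio
    vcpus_per_vm = 1

    vms_per_host = sockets * cores_per_socket * vcpu_per_core // vcpus_per_vm
    print(vms_per_host)          # -> 64 VMs per ESX host

    ram_gb = 256                 # blade RAM ceiling
    avg_vm_ram_gb = 3            # hypothetical average per VM
    print(vms_per_host * avg_vm_ram_gb <= ram_gb)   # True: RAM is not the gate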

Considering 64 VMs per host, it quickly becomes obvious why a cautious approach to storage sizing is required, particularly in an HA/DRS or VMotion environment.  The two extreme approaches immediately become unworkable (the arithmetic is sketched after the list):

  • 1 LUN per host – with 64 VMs and a 2TB limit for VMFS, space constraints alone are likely to make this approach unworkable, but at this level of VM density, SCSI contention renders it moot.  While 100 VMs per LUN is theoretically possible, real-world SCSI bus efficiency puts the practical limit closer to 20 or so
  • 1 LUN per VM – to avoid contention and space issues altogether, and to simplify automated workflow orchestration development, some customers move forward with a 1:1 scheme.  With 64 guests per host, this approach becomes too limiting from an HA perspective.  Since each ESX host in a VMotion topology must see all LUNs, the 256 LUN per host limit suddenly becomes very restrictive.  Unless capping an HA group at 4 hosts (256 LUNs ÷ 64 LUNs per host) is considered acceptable, this approach can no longer work
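
A rough sketch of that arithmetic, using the figures above (the ~20 VMs per LUN ceiling is the rule of thumb, not a hard VMware maximum):

    # Why both extremes break down at 64 VMs per host
    VMS_PER_HOST = 64
    MAX_LUNS_PER_HOST = 256
    PRACTICAL_VMS_PER_LUN = 20
    VMFS_LIMIT_TB = 2.0

    # Extreme 1: one LUN per host -> 64 VMs share a single ~2 TB datastore
    space_per_vm_gb = (VMFS_LIMIT_TB * 1024) / VMS_PER_HOST
    print(f"1 LUN/host: {space_per_vm_gb:.0f} GB per VM, "
          f"{VMS_PER_HOST} VMs vs a ~{PRACTICAL_VMS_PER_LUN} VM contention ceiling")

    # Extreme 2: one LUN per VM -> every host must see every VM's LUN
    max_hosts = MAX_LUNS_PER_HOST // VMS_PER_HOST
    print(f"1 LUN/VM: HA group capped at {max_hosts} hosts "
          f"({VMS_PER_HOST} LUNs consumed per host added)")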

So what is the correct way forward?  In my opinion, these realities are simply yet another driver for the, admittedly difficult, practice of granular workload profiling.  It is critical to do a real analysis of CPU, RAM, IOPS, KB/s and space requirements across the workload mix.  This data will drive the configuration towards a “right-sized” storage tier that provides a balanced mix of scaling efficiency, fault tolerance and performance.
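
As an illustration of where that profiling data leads, here is a hedged right-sizing sketch.  Every input (per-VM space and IOPS averages, the per-LUN IOPS budget) is hypothetical; the point is simply that the measured profile, not either extreme, should set the VMs-per-LUN packing.

    import math

    # Hypothetical averages from a workload profiling exercise
    avg_vm_space_gb = 40
    avg_vm_iops = 60

    # Hypothetical storage-tier characteristics
    lun_size_gb = 2000           # just under the 2 TB VMFS limit
    lun_iops_budget = 1200       # what one LUN's backing spindles sustain
    contention_ceiling = 20      # practical VMs-per-LUN guideline

    vms_per_lun = min(
        lun_size_gb // avg_vm_space_gb,   # space-bound packing
        lun_iops_budget // avg_vm_iops,   # IOPS-bound packing
        contention_ceiling,               # SCSI contention guideline
    )

    vms_per_host = 64
    luns_per_host = math.ceil(vms_per_host / vms_per_lun)
    max_cluster_hosts = min(256 // luns_per_host, 32)   # LUN visibility + HA/DRS cap

    print(f"{vms_per_lun} VMs per LUN, {luns_per_host} LUNs per host, "
          f"HA/DRS cluster of up to {max_cluster_hosts} hosts")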

 
