With nested ESX humming along happily I decided it was time to get serious about simulating a typical enterprise virtualization architecture.  My goal is to be able to test pretty much anything in the VMware product line, and that means SRM, which means multiple datacenters, but more importantly, multiple vCenters.  With nested, this becomes super easy.  Here is how I decided to parcel out my resources:

Image

A few things to keep in mind when setting up the hosts (this applies to all VMware designs):

  • It’s a good best practice to isolate different types of management traffic. Even though this isn’t really relevant when all of these NICs are virtual and riding on one (or a few) physical host NICs, it’s also very easy with nested since it’s just software, and it’s a good habit to get into.  Be sure to set up at least two vmkernel NICs on the vSwitch where the management port groups live.  Make sure that these vNICs are configured for fault tolerance and that at least two have “Management Traffic” checked.  This will remove any warnings about Management Network Fault Tolerance (see the PowerCLI sketch after this list).  Done right, the final visual overview of the vSwitch should look like this:

Image

  • When configuring an HA/DRS cluster, vCenter will look for two shared datastores that can be used for heartbeat exchange.  This is less straightforward in a nested setup where you aren’t connecting hosts to a SAN or NAS.  The best solution here really is to introduce a NAS into the mix.  In my case I created both an iSCSI target and an NFS share and configured each host for access to both (also covered in the sketch below).  Within a few minutes vCenter will pick up on the fact that the cluster member hosts have two datastores in common and will clear the warning.  Below you can see both the NFS and iSCSI datastore overviews.  Note that the iSCSI target also has the physical host configured since I use a single iSCSI target globally:

Image

Image
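Both of the items above can also be scripted with PowerCLI rather than clicked through.  Here is a minimal sketch; the host name, IPs and share path are hypothetical placeholders for my lab values:

# Add a second management-enabled vmkernel NIC to vSwitch0
$vmhost = Get-VMHost -Name "esx1.lab.local"
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch0" -PortGroup "Management2" `
  -IP "192.168.2.21" -SubnetMask "255.255.255.0" -ManagementTrafficEnabled:$true

# Mount the shared NFS datastore used for heartbeating
New-Datastore -VMHost $vmhost -Nfs -Name "NFS01" -NfsHost "192.168.2.100" -Path "/volume1/nfs01"

# Enable the software iSCSI initiator and point it at the shared target
Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled:$true
$hba = Get-VMHostHba -VMHost $vmhost -Type IScsi
New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.2.100" -Type Send

Repeat per host; once both datastores are visible on every cluster member, the heartbeat warning clears on its own.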

Ultimately my multi-vCenter, multi-virtual datacenter nested implementation will follow the architecture diagram presented below.  Note that the plan is to deploy vCD instances into both vCenter implementations:

Image


This is another topic that has been covered thousands of times (including in these pages!), but I thought with new waves of both vCenter and Windows it might be worth documenting one more time. So with that in mind I give you a visual walk-through of Windows 2k12, W2k12 AD, W2k8 and vCenter 5.5 setup!  First let’s create a new virtual machine for AD.  I am creating AD and vCenter on the physical ESX host; these are likely to be some of the only services I will run outside of the nested ESX hosts.  As per usual, from the vSphere client (web or legacy) we select Create a New Virtual Machine from the host focus and, in this case, we can stick with “Typical”:

Screenshot 2014-04-14 18.02.26

Give our new VM a name:

Screenshot 2014-04-14 18.02.39

Select a datastore:

Screenshot 2014-04-14 18.02.45

Choose the OS (latest version of vSphere provides 2k12 64bit as an option):

Screenshot 2014-04-14 18.02.53

Assign the vNIC to a VSS port group:

Screenshot 2014-04-14 18.03.02

Provide a virtual disk (40GB is fine):

Screenshot 2014-04-14 18.03.06

Go ahead and Finish, but check “Edit Settings” so we can attach a virtual CD/DVD for first boot:

Screenshot 2014-04-14 18.03.12

Browse to an ISO on a datastore (in this case my NFS install share):

Screenshot 2014-04-14 18.03.23

Select the Windows 2012 ISO:

Screenshot 2014-04-14 18.03.35

 

We can now power on the VM and launch the VM remote console.  The Windows installation boot should start:

Screenshot 2014-04-14 18.04.01

Enter the old product key if you have it:

Screenshot 2014-04-14 18.04.25

Pick an OS edition (I went with Datacenter to entitle the entire host to unlimited guests):

Screenshot 2014-04-14 18.07.27

Agree to stuff no one reads and hopefully will never be held accountable for:

Screenshot 2014-04-14 18.07.57

Go for “Custom Install” since this is a new build (I feel “Custom Install”, complete with an ominous “advanced” warning, is misleading here, but in any event…):

Screenshot 2014-04-14 18.08.05

 

Select a destination volume:

Screenshot 2014-04-14 18.11.37

And go ahead and Install Now:

Screenshot 2014-04-14 18.08.45

 

Files will copy as always:
Screenshot 2014-04-14 18.11.47

And when complete, and after a reboot, we will be greeted by the “weird to see on a server and not in a good way” MetroUI login:

Screenshot 2014-04-14 18.19.17

First up let’s install the ol’ VMware Tools:

Screenshot 2014-04-14 18.20.25

Yes yes, very scary:

Screenshot 2014-04-14 18.20.07

Install prep starts:

Screenshot 2014-04-14 18.20.37

Acknowledge:

Screenshot 2014-04-14 18.20.51

I always go with “Complete” here since it can’t hurt:

Screenshot 2014-04-14 18.21.02

Fire off the Install:

Screenshot 2014-04-14 18.21.07

Files will copy:

Screenshot 2014-04-14 18.21.24

And we’re done:

Screenshot 2014-04-14 18.21.29

We now need to restart which sucks (although it doesn’t suck as much as actually trying to find how to shutdown in the MetroUI!):

Screenshot 2014-04-14 18.21.35

Once we’re back it’s time to set up the network:

Screenshot 2014-04-14 18.22.54

UI elements here pretty much unchanged since 2k8:

Screenshot 2014-04-14 18.23.04

UI elements here pretty much unchanged since Windows NT 4!:

Screenshot 2014-04-14 18.23.15

Next we give this beast a name:

Screenshot 2014-04-14 18.28.03

After a reboot to make the name stick we head right into Server Manager (this is very new compared to 2k8) in order to manage our roles:

Screenshot 2014-04-14 18.24.00

Acknowledge that, yes, this is all very amazing:

Screenshot 2014-04-14 18.24.09

We are planning to do a role based install:

 

Screenshot 2014-04-14 18.24.18

Select our server:

Screenshot 2014-04-14 18.24.41

Choose our roles.  In my case I am doing AD so I select Active Directory Domain Services and DNS.  I leave File Services checked since that can be useful as well:

Screenshot 2014-04-14 18.24.54

Accept the pre-determined minimum required feature set (I don’t add any additional):

Screenshot 2014-04-14 18.29.18

Read some interesting fun facts about AD:

Screenshot 2014-04-14 18.30.32

And DNS…

Screenshot 2014-04-14 18.30.38

Confirm our task list:

Screenshot 2014-04-14 18.30.43

Install begins:

Screenshot 2014-04-14 18.30.49

Pretty good verbosity on progress updates in the new server manager:

Screenshot 2014-04-14 18.32.56

Configure AD.  I am creating a new forest so select “Add a new forest”:

Screenshot 2014-04-14 18.33.16

And give it a name:

Screenshot 2014-04-14 18.35.11

Provide functional levels for the forest and domain. This is a net new install and I don’t plan on introducing any legacy domain controllers, so 2k12R2 native is fine (although it is interesting that R2 is called out as a discrete functional level).  I make every AD DC a DNS server and a GC as well, so both of these stay checked.  Last step is to provide a Directory Services restore password:

Screenshot 2014-04-14 18.35.32

Next we set DNS options (of which there are none):

Screenshot 2014-04-14 18.35.55

Provide the NetBIOS name (amazing… NetBIOS may never fully die.  Viva la NetBIOS!):

Screenshot 2014-04-14 18.36.13

Accept the default paths (or don’t, your choice):

Screenshot 2014-04-14 18.36.25

Sign off on the actions to be performed:

Screenshot 2014-04-14 18.36.30

Notice that “View Script” button? Now this is absolutely awesome if you ask me.  Like it or not, “operations” is evolving into “devops”.  This “push button, get script” option is gold for any traditional infrastructure administrator interested in self-preservation.  It provides an opportunity to see what everything the wizard is about to do would look like done programmatically in PowerShell.  I cannot say enough how much I love this feature.  And look how simple the script is!  It might actually be easier to write the script than to click through the GUI:

Screenshot 2014-04-14 18.36.40
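For anyone who can’t make out the screenshot, the generated script is essentially a single Install-ADDSForest call.  A representative sketch (reconstructed, not my literal screen output; the domain name and paths here are placeholders, substitute your own):

Import-Module ADDSDeployment
Install-ADDSForest `
  -DomainName "lab.local" `
  -DomainNetbiosName "LAB" `
  -ForestMode "Win2012R2" `
  -DomainMode "Win2012R2" `
  -InstallDns:$true `
  -DatabasePath "C:\Windows\NTDS" `
  -LogPath "C:\Windows\NTDS" `
  -SysvolPath "C:\Windows\SYSVOL" `
  -NoRebootOnCompletion:$false `
  -Force:$true
# Prompts for the SafeModeAdministratorPassword (the DS recovery password from the wizard)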

With all of the pre-work done we can go ahead and fire off the Install:

Screenshot 2014-04-14 18.37.25

 

With that our AD domain is finished and online! Of course we don’t really need it thanks to vCenter SSO, but it certainly can’t hurt!  Next up let’s install the actual vCenter.  As far as I know there are some compatibility issues between vCenter 5.5 and Windows Server 2012 R2.  I’d rather not take any risks or run into any weirdness, and I’d also prefer not to fragment my Windows 2012 footprint, so rather than dropping back to 2012 R1 I go ahead and just deploy what I am sure works – Windows Server 2008 R2.  This is a good example of what enterprises deal with, as I am now, even in my small home lab, dealing with 3 discrete Windows images (including my Windows 8 Pro admin console).  First thing is to create another VM, just as per the instructions above, but in this case setting the guest OS to Windows Server 2008 R2 64bit and pointing the virtual CD/DVD at the W2k8 ISO.  On first power up, the Windows 2008 installation should boot:

Screenshot 2014-04-14 18.53.09

Setup starts… Nothing new here:

Screenshot 2014-04-14 18.53.27

More license terms:

Screenshot 2014-04-14 18.53.39

Once again, as with 2k12, the Custom (advanced) option is for new installs:

Screenshot 2014-04-14 18.53.47

After the files copy Windows will do final configuration:

Screenshot 2014-04-14 19.01.03

And we’re done:

Screenshot 2014-04-14 19.01.29

The decidedly less slick but nearly equally functional 2k8 Server Manager greets us:

Screenshot 2014-04-14 19.02.06

First step is to set up our network:

Screenshot 2014-04-14 19.02.36

And give her a name:

Screenshot 2014-04-14 19.03.51

Next we join our shiny new Windows AD domain:

Screenshot 2014-04-14 19.13.43

Provide the creds with sufficient privilege to join a PC:

Screenshot 2014-04-14 19.14.03

And we’re in!

Screenshot 2014-04-14 19.14.24

After returning from the reboot it’s time to activate:

Screenshot 2014-04-14 19.16.08

This should work with no issues, but if the key was used previously activation is just a (fully automated) phone call away:

Screenshot 2014-04-14 19.18.31

Hurray we’re genuine!

Screenshot 2014-04-14 19.19.06

Next up is the tools install once again:

Screenshot 2014-04-14 19.19.55

Restart to complete:

Screenshot 2014-04-14 19.28.28

When we return it is time to set up vSphere 5.5.  Pop in the VIMSetup-ALL volume and the Autorun will bring up the main setup:

Screenshot 2014-04-14 19.46.39

Pre-reqs check is very easy and should pass with no issues if DNS has been correctly configured and the PC can resolve its own name:

Screenshot 2014-04-14 19.47.44

Next we provide a password for the vCenter Single Sign On facility administrator account.  This is super important as it will be the account you have to use for initial logon to the vCenter:

Screenshot 2014-04-14 19.47.58

Here we can provide a site name.  I just stick with “default-first-site” in the lab, but in a real scenario a properly descriptive site name should be used and follow some reasonable naming convention:

Screenshot 2014-04-14 19.48.38

Here we set the TCP port for the SSO service (I leave the default):

Screenshot 2014-04-14 19.50.41

You can change the destination folder for vCenter if you want or need to:

Screenshot 2014-04-14 19.50.46

With all of the upfront work done we can go ahead and Install:

Screenshot 2014-04-14 19.50.51

Files will copy…

Screenshot 2014-04-14 19.50.55

The installation process is scripted.  At points it will appear to stop and return to the main Install screen.  It is not in fact stopping, but rather the script is still working in the background and launching the next component install.  Be patient until the final notification that all setup is completed:

Screenshot 2014-04-14 19.54.39

Here we can see the next module install (in this case vSphere Web Client) has triggered:

Screenshot 2014-04-14 19.55.25

Now the Inventory Service:

Screenshot 2014-04-14 19.57.52

And the main server itself:

Screenshot 2014-04-14 20.01.02

At this stage we are prompted for our license key:

Screenshot 2014-04-14 20.01.44

And now we must select our vCenter database.  There are two options here.  We can either utilize the included SQL Server 2008 Express package, which is officially limited to 5 hosts and 50 VMs, or we can configure an external data source (meaning a SQL Server that we have already installed and have online).  If you take the latter approach, just be sure that you have your SQL authentication properly configured (either Windows or SQL auth) and that you know which user vCenter will log in as (it should be able to create and own a database).  In my case I opt for SQL Express:

Screenshot 2014-04-14 20.07.39

We can now choose to have the service sign on as a dedicated service account rather than Local System if we want or need to:

Screenshot 2014-04-14 20.07.45

Great dialogue box here giving us full control over TCP port assignment for the various vCenter network services.  I stick with defaults; your mileage will almost certainly vary:

Screenshot 2014-04-14 20.07.52

Next we size the inventory according to our projected deployment scale.   Small is the right match for almost any lab:

Screenshot 2014-04-14 20.08.01

With the options all set we can go ahead and Install:

Screenshot 2014-04-14 20.08.11

Files will be copied…

Screenshot 2014-04-14 20.08.38

SQL will be installed and configured via unattended script:

Screenshot 2014-04-14 20.09.09

Have patience while it runs…

Screenshot 2014-04-14 20.09.33

You will watch the entire SQL Express install process run lights out:

Screenshot 2014-04-14 20.11.08

When it is complete, vCenter install will continue:

Screenshot 2014-04-14 20.12.48

Still more files will be copied…

Screenshot 2014-04-14 20.13.26

Various configuration tasks will be run:

Screenshot 2014-04-14 20.14.16

Once completed, the services will start:

Screenshot 2014-04-14 20.17.13

Additional components will be installed (in this case Orchestrator):

Screenshot 2014-04-14 20.17.34

Profile driven storage…

Screenshot 2014-04-14 20.19.57

And we’re done!

Screenshot 2014-04-14 20.20.14

 

At this stage the main Installer finally gives us the “all clear”:

Screenshot 2014-04-14 20.20.29

Next I choose to install the optional Update Manager.  Update Manager should be installed on the administrative console that will be used for managing the server farm via GUI.  In my case I tend to run the GUI right off of the vCenter server quite often, so I install here:

Screenshot 2014-04-14 20.20.43

 

Install starts:

Screenshot 2014-04-14 20.20.49

Warning that Update Manager will upgrade hosts and also a chance to set up the first download immediately following install:

Screenshot 2014-04-14 20.21.15

Provide the vCenter creds for Update Manager (note: the SSO Admin creds, or another admin user if you have created an additional one, are wanted here):

Screenshot 2014-04-14 22.03.56

Once again we select a database:

Screenshot 2014-04-14 22.04.19

And once again an opportunity to specify network port and address assignments, this time for Update Manager:

Screenshot 2014-04-14 22.04.43

A chance to change the path:

Screenshot 2014-04-14 22.05.01

A warning about disk space if the installation volume is south of 120GB.  I disregard as you can always grow this volume if you need to and my lab won’t exceed 40GB anyhow:

Screenshot 2014-04-14 22.05.24

Files copying, a recurring theme!

Screenshot 2014-04-14 22.06.02

And we are done with Update Manager:

Screenshot 2014-04-14 22.06.08

Next I decide to check out the vSphere Web Client since this has become the official client (the legacy client is being deprecated).  Of course Microsoft chose to annoy admins the world over nearly a decade ago by locking down Internet Explorer to a ridiculous degree by default.  As a result surfing anywhere is initially a nightmare.  First step (for me) is to kill IE Enhanced Security Configuration, which is done through Server Manager:

Screenshot 2014-04-14 20.38.15

With that done we can check out the web client.  Note that it is on port 9443 (as per our install configuration) and you will need Flash (boo! hiss! seriously though, this requirement needs to go).  To log in you will again need the SSO admin credentials (administrator@vsphere.local by default) unless and until an alternate user is created.  The web client looks really sharp:

Screenshot 2014-04-14 20.40.38

First stop I decide to explore the SSO config and add Active Directory as an authentication provider.  Head over to Roles:

Screenshot 2014-04-14 21.47.31

We can “Add an Identity Source”.  I choose AD as an LDAP server.  You will need to provide domain, DN and context info, and the syntax is super important.  You can refer to the screenshot to see the syntax requirements (a sketch of the field values also follows the screenshot) and substitute your own domain info for mine when configuring your own lab.  For the login I created a service account, but any domain account that can do a lookup against the global catalog (basically any account) should work:

Screenshot 2014-04-14 22.01.25
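If the screenshot is hard to read, the general shape of the entries is roughly the following (my domain swapped for a hypothetical lab.local; substitute your own domain, DC and service account):

Name:                lab.local
Base DN for users:   DC=lab,DC=local
Domain name:         lab.local
Domain alias:        LAB
Base DN for groups:  DC=lab,DC=local
Primary server URL:  ldap://dc1.lab.local:389
Username:            LAB\svc-sso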

And our AD has been configured as an identity source!

Screenshot 2014-04-14 22.09.24

The only thing left to do is configure our base vCenter objects and add our main host to the new vCenter.  Let’s go ahead and walk through this quick and painless process.  For this I revert to the legacy client just because I’m finding it hard to cut that cord and I am less efficient in the new client.  It’s probably good that VMware is taking away the crutch though or I’d likely never learn my way around the new one!  For now we’ll stick with legacy though.  After connecting, we see a pretty blank slate.  The first step is to go ahead and “Create a Datacenter”.  This pretty much just requires choosing a name at this stage:

Screenshot 2014-04-14 22.09.59

With our new datacenter object in place, we can go ahead and “Add a Host”:

Screenshot 2014-04-14 22.10.18

We need our host IP and root login credentials to get started:

Screenshot 2014-04-14 22.10.25

Acknowledge the certificate alert (incidentally running an enterprise PKI and configuring all of the elements to use it and reference an enterprise root would remediate the endless alerts):

Screenshot 2014-04-14 22.10.42

Confirmation that the host was discovered and a chance to verify before continuing:

Screenshot 2014-04-14 22.10.50

Enter a license key (redacted to protect the innocent!):

Screenshot 2014-04-14 22.10.57

Configure lockdown mode if that’s your thing.  I endlessly SSH into hosts so this definitely stays off for me:

Screenshot 2014-04-14 22.11.01

Choose a datacenter to add the host to (we only have one):

Screenshot 2014-04-14 22.11.07

Review all of the info provided so far and finish:

Screenshot 2014-04-14 22.11.12
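For the scripting-inclined, the whole flow above collapses to a few lines of PowerCLI.  A sketch with hypothetical names and credentials:

Connect-VIServer -Server "vcenter.lab.local" -User "administrator@vsphere.local" -Password "VMware1!"
# Create the datacenter object at the root folder, then add the physical host
$dc = New-Datacenter -Location (Get-Folder -NoRecursion) -Name "Lab"
# -Force pushes past the self-signed certificate warning described above
Add-VMHost -Name "192.168.2.4" -Location $dc -User "root" -Password "RootPass1!" -Force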

And that’s it!  Our host and its associated resources and VMs have been added to the vCenter and should now be managed through the vCenter interface:

Screenshot 2014-04-14 22.12.23

OK, that’s it for the sidebar.  You’ve seen how vCenter was setup and configured in between the first host installation and the nested ESX configuration where we left off.  Back to the main action!


I know that there have been (literally) thousands of articles written on nested ESX, but I decided to do one anyhow as, over time, I plan to build on this foundation entry with some content that actually will be new and interesting as it relates to hybrid cloud (stay tuned for that).  So with that out of the way, let’s review some basics about “nested ESX”.

What is Nested?

Nested ESX is exactly what it sounds like.  The idea is that you install ESX into a guest VM on a physical ESX host.  What you end up with is hypervisor on hypervisor thereby making the CPU time slicing and overall resource allocation and consumption even more complex.  So why would one do this?  Well as it turns out, this is a fantastic setup for lab testing.  You can basically build multiple virtual datacenters on a single machine and do nifty things like SRM testing.  So certainly not something one would recommend for production, but literally miraculous for labs.

What’s the Catch?

There’s always a catch, right?  Well nested is no exception, although the good news is that “out of the box” support has gotten better and better with each iteration for what started essentially as a skunk works science project.  So today there is no ESX command line hacking required, believe it or not, and ESX is actually recognized as a valid (if unsupported) guest OS.  All of that said, there are some caveats to be aware of.  The first one concerns networking.  To understand what the catch is here we first must consider what is happening in a standard ESX installation:

Image

With virtualization, we have a physical host running a hypervisor OS, which abstracts physical resources into virtual resource pools and brokers their consumption by guest operating systems.  The physical uplinks which connect the host to a physical switch are connected through software to a “virtual switch”.  As virtual machines are created and deployed onto the host, they are configured with a set of virtualized hardware.  This hardware is either passed through (hardware virtualization), brokered by special software support in the guest (paravirtualization) or, in some cases, emulated.  With x86 virtualization, the CPU is time sliced and instructions are passed through, so “virtual CPUs” are essentially timeshare units on the actual physical CPU.  I have a more extensive article on the various flavors of x86 virtualization that provides more background on these concepts.  Under ESX, networking is interesting in that there are two options.  Using the VMware VMXNET virtual network interface means using a paravirtualized driver, which requires driver installation inside the guest OS and in return delivers optimized performance.  Alternatively, the host can emulate the function of the Intel E1000 NIC and trick the guest OS into thinking one of those actually physically exists at a given PCI I/O address range.  Whichever approach you choose, ultimately the virtual NIC connects to the virtual switch.  The diagram above captures the flow.   The key point here is that the relationships are all 1:1.  A guest OS has one (or more) NICs that connect to the virtual switch, and it basically replicates how the physical world would work.  Now consider what happens when the guest OS is itself a hypervisor.

As expected, what happens is a bit of a mess.  Now you have a guest OS virtual NIC being used as the uplink for yet another virtual switch, which in turn provides a connection point for additional virtual NICs that connect guests.  Where we run into trouble is that the foundation host (the physical one) managing the base virtual switch has no idea about any virtual NICs that are provisioned by a guest OS hypervisor.  As a result, this traffic gets dropped.  In turn, any traffic destined anywhere other than the virtual NIC the host does know about (our “primary guest”) will be dropped.  So what is the answer here?  Well it turns out we really need two things.  First, we need frames from MAC addresses that are unknown to the physical host to be allowed to pass (these are the MAC addresses created by the guest OS hypervisor for its guests).  In addition, we then need a way for all of those guests sitting unknown up in the second hypervisor to participate in the main virtual switch.  Luckily ESX provides a set of configuration options that solve both of these problems.  Let’s take a look:

Screenshot 2014-04-15 14.23.10

 

Doesn’t this look promising?  Let’s go through them one by one:

  • Promiscuous Mode – this one is exactly what it sounds like. When enabled on a virtual switch port group, that switch essentially becomes “full broadcast”.   Any VM attached will be able to see all traffic in the port group.  Why is this?  Simply put, it ensures that the primary VSS’s ignorance of the MAC addresses upstream from it doesn’t matter.  Since every frame will be broadcast, these frames will hit the virtual port whether the switch intelligence thinks that port is a valid destination or not. In other words this is a sledgehammer fix to the problem.  It would be much cooler if a VSS had the intelligence to actually recognize nesting and learn upstream MAC addresses, but maybe that is something for the future (or maybe it won’t matter because we will all be on NSX!)
  • MAC Address Changes – this setting deals with the problem going the other way.  It basically allows the guest OS to do locally administered station address control of its virtual NIC MAC address.  This is us telling the virtual switch intelligence not to worry if the MAC address allocated to the guest VM virtual NIC happens to change.
  • Forged Transmits – a companion setting, forged transmits basically says that the virtual switch shouldn’t be concerned if MAC address 00:00:00:00:00:0B suddenly shows up at the virtual port where 00:00:00:00:00:0A had originally attached (a scripted version of all three settings follows this list).
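If you’d rather flip these three switches from PowerCLI than click through the dialogue, a minimal sketch (the host and vSwitch names are hypothetical):

# Allow promiscuous mode, MAC address changes and forged transmits on the VSS
$vss = Get-VirtualSwitch -VMHost "esx1.lab.local" -Name "vSwitch1"
Get-SecurityPolicy -VirtualSwitch $vss |
  Set-SecurityPolicy -AllowPromiscuous $true -MacChanges $true -ForgedTransmits $true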

Taken as a group, these settings allow traffic to flow out from nested guests (MAC changes and forged transmits) and the return traffic to flow back in to them (promiscuous mode).  So with that networking configuration done, we must be good to go, right?  Well not so fast!  There is more that needs to be done for nested to work.  The next complication comes from the configuration of the virtual machine itself.  After all, we are going to be installing a hypervisor into this guest.  These days, the virtual machine monitor is no longer a pure software thing.  Even VMware (the granddaddy of x86 VMMs and the last to move away from pure software) now utilizes CPU and chipset support for virtualization – namely Intel VT and AMD-V.  As a result, this support (normally obscured from the guest) needs to be exposed to it.  For these options we actually need the vSphere Web Client (an interesting requirement that in one way makes vCenter mandatory for nested implementations).  Luckily I do have a vCenter that I put up immediately after the initial ESXi 5.5 install on the physical host.  I documented the setup as a sidebar in case anyone would like to see the latest changes in both Windows and vCenter.

 

If we bring up the settings in the web client for a new virtual machine we are looking for the extended options under CPU:

Screenshot 2014-04-16 23.03.26

What we want here is two things:

  • Expose Hardware Virtualization to the Guest OS:  this means that the guest will be able to identify and access the hardware-based virtualization support in Intel VT and AMD-V
  • CPU/MMU Virtualization: locking this setting on “Hardware” for both ensures that hardware-accelerated virtualization will be provided to this guest for both the CPU instruction set and I/O MMU operations.  The alternative is “Automatic”, but since we are installing hypervisor on hypervisor we know we will need it

In addition to these settings, once the VM has been created (and it can be in the immediate “configure settings” step that follows initial creation), we can set the OS type to correctly reflect our guest.  As we can see here, “VMware ESX 5.X” is now selectable as an OS under “Other”.  This step, incidentally, should alleviate the old need to set vhv.enable = “TRUE” from the 5.0 and 5.1 days in order to get a 64 bit guest running on a virtual hypervisor (a scripted version of these settings follows the screenshot):

Screenshot 2014-04-15 14.26.52
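Both CPU settings and the guest OS type can also be set from PowerCLI; a sketch, assuming vSphere 5.5 and a nested host VM named vESX1 (hypothetical):

$vm = Get-VM -Name "vESX1"
# Set the guest OS type to nested ESXi 5.x
Set-VM -VM $vm -GuestId "vmkernel5Guest" -Confirm:$false
# Expose hardware virtualization (Intel VT / AMD-V) to the guest via the vSphere API
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NestedHVEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)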

 

With the above we now have everything we need to get started deploying virtual ESX guests. Installing these, with the pre-reqs done and the caveats in mind, is as easy as deploying any other OS.  Follow the Windows guest deployment steps above, address the VM config caveats, and point to the VMvisor Installer ISO, and you will have no issues.  Similarly, adding these virtual ESX hosts to vCenter is exactly as described in the vCenter configuration entry for the physical host.  The behavior is exactly as expected.  One thing I did choose to do was create a dedicated VSS for each virtual ESX host and assign a dedicated NIC to it.  This is a very straightforward operation from either client by selecting Networking with the host as the focus and choosing “Add Networking”:

Screenshot 2014-04-16 23.20.30

Because the virtual ESX is really just a guest VM from the primary host’s point of view, we select New Virtual Machine Port Group:

Screenshot 2014-04-16 23.21.00

We want to go ahead and Create a New Virtual Switch here since we are dedicating a VSS to each virtual ESX guest:

Screenshot 2014-04-16 23.21.10

Here we will have an adapter list where we can check off an available adapter to assign to the new VSS.  In this case my configuration is already complete so no adapters are showing, but this would be a straightforward selection followed by a straight click-through on the remaining options (including naming the new VSS).  A scripted equivalent follows the screenshot:

Screenshot 2014-04-16 23.21.21
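The scripted equivalent of the wizard above, for reference (names are hypothetical):

# Create a dedicated VSS bound to one free physical uplink, plus a VM port group on it
$vmhost = Get-VMHost -Name "192.168.2.4"
$vss = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch-vESX1" -Nic "vmnic1"
New-VirtualPortGroup -VirtualSwitch $vss -Name "vESX1-PG"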

 

When building the VM for the nested ESX guest, I attach the network adapter to the associated VSS.  So in my case 4 nested ESX instances map to 4 VSSes and 4 physical NIC ports on the host.  Speaking of architecture, this is ultimately what I am targeting as my design:

Lab-Nested

 

Some points worth calling out here:

  • I am planning 4 virtual ESX hosts rather than 3.  They will have 32GB of RAM, 1TB of disk, 4 vCPUs (2 virtual dual cores) and a 15GB SSD (for vSAN).
  • Each virtual ESX will be connected to a dedicated VSS on the main host which will have a dedicated physical NIC
  • All of the virtual ESX hosts will be joined to vCenter
  • A VDS will be configured across the virtual hosts only
  • I plan to ultimately install vCloud Director on top of all of this and configure tenant organizations (the VLAN and network config info up top)
  • vCenter and AD will run on the physical host along with some other core bits (vCenter Mobile Access, maybe one or two other things).  The main host is left with 5TB of disk, 10GB of SSD, 64GB of RAM, and 4 full time CPUs.
  • I may separate the 4 hosts into 2 vCenters in order to be able to simulate two sites and do SRM (still debating this)

OK, with the above architecture in mind, let’s go ahead and create the virtual ESX guests.  And *poof*, we’re done!  Cooking show style seems appropriate here, so I will just show the final product:

Screenshot 2014-04-16 15.40.53

 

Above we can see the physical host (192.168.2.4) with its guests (vESX1-4).  Below it we can see each of these guests represented as hosts in vCenter – 192.168.2.5-8.  The last step for this entry is to create a distributed virtual switch for these hosts so we can get started with some more advanced configuration like VXLAN and vCD.  As a refresher, a virtual distributed switch requires a dedicated NIC on each host to assign as the vDS uplink.  Well in this case, since our hosts are virtual, this is as easy as it gets!  We just need to go into the VM settings for each virtual host and “add” a new “network adapter”.  The only catch is that this will require a reboot for the new (virtual) hardware to be recognized by the (virtual) server.  Once complete, we can go ahead and create a new vDS by selecting “Create a distributed switch” with the virtual datacenter as the focus in the web client:

Screenshot 2014-04-16 15.40.53

First we choose a version for our switch (I choose current for maximum compatibility with my scenario which is “testing new stuff”):

Screenshot 2014-04-16 16.32.48

Next we can name the switch and set the number of uplink ports which governs the number of “physical” connections per host (in our case just vNICs, but also largely irrelevant):

Screenshot 2014-04-16 16.32.55

We now decide if we want to add hosts now or later (I choose now – Carpe Diem!):

Screenshot 2014-04-16 16.33.06

In the next dialogue we are given an opportunity to select the hosts that will participate and the available NICs that will be connected to the vDS.  We can see here the NICs I added to each host VM (vNIC1) for this purpose.  A scripted version of the whole vDS build follows the screenshot:

Screenshot 2014-04-16 17.23.33
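For reference, the scripted version of this build using the PowerCLI VDS cmdlets.  Treat it as a sketch; the switch name is hypothetical and the host IPs are from my lab:

# Add the uplink vNIC to each nested host VM first (requires a guest reboot), e.g.:
New-NetworkAdapter -VM (Get-VM "vESX1") -NetworkName "vESX1-PG" -Type Vmxnet3 -StartConnected
# Create the vDS, then join each virtual host and bind its spare vmnic as the uplink
$vds = New-VDSwitch -Name "Lab-vDS" -Location (Get-Datacenter "Lab") -NumUplinkPorts 1
foreach ($vmhost in Get-VMHost "192.168.2.5","192.168.2.6","192.168.2.7","192.168.2.8") {
  Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost
  Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1" |
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -Confirm:$false
}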

Next step is just to commit the config and create the switch.  As we can see below everything went as smooth as can be and the 4 virtual hosts are now linked by a vDS!  We are now about 70% of the way towards our diagram and all of the foundation has been laid, so this is a good stopping point.   Hope you enjoyed the (million and first) entry on Nested and stay tuned for the next entry!

Screenshot 2014-04-16 17.24.15


Two things I’ve always wanted to try in the ESX lab were DirectPath I/O and vSAN.  For the former, I always liked the idea of having a GPU accelerated virtual desktop to use as a jump server and also of testing just how good GPU acceleration can be in a virtual environment. vSAN is extremely compelling because I find the idea of highly scalable and efficient distributed file systems based on DAS to be a perfect fit for many cloud scenarios, if you can architect an efficient enough capacity planning and resource allocation/consumption model to match.  In the past I never had a test server platform with VT-d (or AMD-Vi), pre-requisites for VMware DirectPath directed I/O, so the GPU scenario was out.  With vSAN it was more about finding the time and catalyst to implement.  As it turns out, the T620 and the new lab effort solve both of these problems!

Before laying down the ESXi install, I decided to do a few hardware tweaks in service of my two stretch goals.  I had two passively cooled AMD PCI-E GPUs on hand (both R800 era… nothing fancy) and I also had a spare 80GB Intel SSD laying around (yes, I have SSDs “laying around” and should probably seek help).  In the case of the SSD, this particular requirement of vSAN (SSDs required to act as a buffer for the virtual SAN) can be bypassed with some ESXCLI wizardry as explained by Duncan Epping, but since I had a spare I figured I might as well use it as long as the server had a spare SATA port.

First step was to open her up (something I knew I had to do at some point anyhow just to see how clean the cable routing is!).  First up is to find the ingress point.  Absolutely fantastic design element here as there are no screws (thumb or otherwise) and the entry point is fully intuitive.  A nicely molded and solid feeling lockable handle sits right on the side panel.  I unlocked it, pushed down on the release trigger by gripping the handle, and pulled forward.  The door opens smoothly and settles straight down to the horizontal on its tabs.  It can be removed as well if need be:

WP_20140415_23_13_20_Pro

Inside, things are looking clean and really great:

WP_20140415_23_14_08_Pro

Another cool thing is that the second the case was open, the intrusion detection tripped and the front panel LCD went amber and displayed the alert.  Very neat:

WP_20140415_23_16_01_Pro

Of course the alert can be acknowledged in the iDRAC (which is accessible even with the server powered off – excellent stuff):

WP_20140415_23_16_18_Pro

Scoping out the interior, I noticed right away that the tool-less design approach applies to all components and that there appear to be two free x16 PCI-E slots (one top and one bottom) as well as plenty of disk shelf space above the array cage, a spare power connector on the SATA power cable going to the DVD drive, and a single free SATA connector available on the motherboard.  So far so good!  First step was to get access to the PCI-E slots by removing the card braces:

WP_20140415_23_27_18_Pro

The brackets are easily removed by following the directions provided by the arrow and pressing down on the tab while pulling forward.  Once out, there is free access to the PCI-E slots.  The slot clips, also tool-less, can be removed with a similar squeeze and pull motion:

WP_20140415_23_28_43_Pro

With the slots cleared, it was easy work installing the two GPUs in the roomy case (top and bottom shown with clips back in place):

WP_20140416_00_08_50_Pro

WP_20140416_00_08_45_Pro

Next up was the SSD.  I decided not to do anything fancy (especially since I wasn’t 100% sure this would work).  The server is very secure and isn’t going anywhere, and the disk shelves are free and clear and very conveniently placed.  The SSD is small and light so I opted to just cable it up and sit it on the shelf.  Here is a quick pic of the SSD in question before we get into the “installation”.  80GB Intel, a decent performing and very reliable (in terms of write degradation) drive back in the day:

WP_20140416_00_13_21_Pro

First up, a shot of the one free onboard SATA port (Shuttle SATA cable used for comedic effect):

WP_20140416_00_13_11_Pro

Next up, a shot of the drive bay area and free SATA power plug with the SSD “mounted”:

WP_20140416_00_15_46_Pro

And finally, a close up of the SSD nestled in the free bay:

WP_20140416_00_16_46_Pro

That’s it for the hardware tweaks.  Time to close it up and get started on the ESXi 5.5 install!  As always, this is a straightforward process.  Download and burn VMware-VMvisor-Installer-5.5.0-1331820.x86_64 to a DVD, boot her up, and let ‘er whirl.  Installer will autoload:

WP_20140414_00_25_56_Pro

Initial load:

WP_20140414_00_26_17_Pro

Installer file load:

WP_20140414_00_28_49_Pro

Installer welcome screen:

WP_20140414_01_05_36_Pro

EULA acceptance:

WP_20140414_01_05_50_Pro

Select install disk (this is the physical host, so the PERC H710 virtual disk is the target):

WP_20140414_01_06_20_Pro

Select a keyboard layout:

WP_20140414_01_06_56_Pro

Set a root password:

WP_20140414_01_07_16_Pro

Final system scan:

WP_20140414_01_07_31_Pro

“Last exit before toll”:

WP_20140414_01_09_45_Pro

Off to the races!

WP_20140414_01_09_59_Pro

Like magic, (many) seconds later installation is complete:

WP_20140414_01_32_40_Pro

WP_20140414_01_32_45_Pro

First boot of the shiny ESX 5.5 host:

WP_20140414_01_36_38_Pro

Splash screen and initializations are a good sign:

WP_20140414_01_37_06_Pro

As always, first step is to configure the management network (shown here post config):

WP_20140414_02_09_41_Pro

Interesting to have a look at all of the network adapters available in this loaded system.  Select one to use for the initial management network:

WP_20140414_02_08_33_Pro

Provide some IP and DNS info or rely on DHCP:

WP_20140414_02_08_46_Pro

Commit the changes, restart the network and give it a test!

WP_20140414_02_09_53_Pro

Did everything work? Indeed it did, thanks for asking!

Screenshot 2014-04-16 13.56.55

How about the SSD and the DirectPath GPUs? Let’s take a look.  First DirectPath, because the anticipation is killing me.  From the vSphere client, DirectPath settings are found under the Advanced subsection of the Configuration tab when the focus is a host.  The view will initially display an error if the server is incapable of DirectPath (no VT-d or AMD-Vi), or a blank box with no errors or warnings if it is capable.  From here, we click “Edit” in the upper right hand corner to mark devices for passthrough usage.  The following (very interesting) dialogue box pops up:

Screenshot 2014-04-16 00.34.14

Here we can see all of the PCI devices installed in the system and recognized by ESXi.  In the list we can see the AMD GPUs and their sub-devices.  We are also able to select them.  So far so good!  Click the checkboxes and you will get a notice that the sub-devices will also be selected.  Acknowledge and click OK.  We can see that the AMD GPUs have been added and will, in theory, be available for assignment pending a host reboot (yikes):

Screenshot 2014-04-16 00.34.39

 

Following the (long) reboot cycle, I return here and find that the GPUs are in fact available for assignment.  Hallelujah!  I am not going to assign them to a guest yet (though a scripted preview follows below), but we will revisit this when I create the Windows 8 jump VM:

Screenshot 2014-04-16 13.01.54
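When that time comes, the assignment can also be driven from PowerCLI; a sketch (the Windows 8 VM name is hypothetical):

# List passthrough-capable PCI devices, grab a GPU, and attach it to the VM
$vmhost = Get-VMHost -Name "192.168.2.4"
$gpu = Get-PassthroughDevice -VMHost $vmhost -Type Pci |
  Where-Object { $_.Name -match "AMD|ATI" } | Select-Object -First 1
Add-PassthroughDevice -VM (Get-VM "Win8-Jump") -PassthroughDevice $gpu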

So far so good.  DirectPath seemed like the more complicated mission, so I am feeling a bit cocky as I move forward with the SSD configuration.  Of course, as always in technology, that is exactly when Murphy’s Law chooses to strike.  As it turns out, I had forgotten that the last time I used this SSD it was part of a GPT RAID 0 array. As a result, it has an extremely invalid partition table.  ESXi can see it, but errors out attempting to use it.  I decided to see how things were looking from the command line.  Of course, as always, that means first enabling SSH.  The first step is to set the focus to the host and head over to the Security Profile section of the Configuration tab:

Screenshot 2014-04-16 00.40.45

Select the SSH service under Services and click Properties in the upper right corner.  This will invoke the Services Properties dialogue where we can highlight the SSH service and select Options:

Screenshot 2014-04-16 00.41.08

In the following dialogue box we can start the service as well as configure its future startup behavior (the same can be scripted, as sketched below):

Screenshot 2014-04-16 00.41.12
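The same service toggle from PowerCLI, for reference (a sketch; TSM-SSH is the service key for SSH on ESXi):

$vmhost = Get-VMHost -Name "192.168.2.4"
$ssh = Get-VMHostService -VMHost $vmhost | Where-Object { $_.Key -eq "TSM-SSH" }
Start-VMHostService -HostService $ssh
# Optionally have it start with the host from now on
Set-VMHostService -HostService $ssh -Policy "On"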

Next up it’s time to fire up PuTTY.  Of course on first connect we will get the SSH host key warning, which we can just acknowledge and ignore:

Screenshot 2014-04-16 00.41.43

At that point after a quick root login we are in.  The first step is to find out how the system views the SSD device.  The best way to do this is with the ESXCLI storage enumeration command esxcli storage core device list:

Screenshot 2014-04-16 00.46.07

Wow, the SSD has a pretty odd device header!  That’s ok though, this is why copy/paste was invented!  Device name in hand, and knowing that this disk has a GPT partition table, I give partedUtil a try. Unfortunately partedUtil isn’t interested either, reporting “ERROR: Partition cannot be outside of disk”.  My luck: this was the first disk in the span set and so is the one that has a “too big” partition table (the partition table for the entire span set).  After rebooting into various “live CD” and “boot repair” tools I have on hand, and failing miserably for various reasons (inability to recognize the Dell onboard SATA, confusion over the system having 3 GPUs, inability to recognize the onboard GPU, etc.), I finally had a brainstorm – the trusty ESX installation DVD!  Sure enough, the ESXi 5.5 installer was able to see the SSD and was perfectly happy nuking and repartitioning it.  At that point I had an ESXi 5.5 installation partition structure (9 partitions!) hogging up about 6GB of space.  On an 80GB SSD that’s a lot of space, so I went back to the ESX command line to try partedUtil again.  This time things went much better!

partedUtil get /dev/disks/t10.ATA_____INTEL_SSDSA2M080G2GC____________________CVPO012504S9080JGN__

This command returned the partition table: a list of 9 partitions with their starting and ending locations enumerated.

partedUtil delete /dev/disks/t10.ATA_____INTEL_SSDSA2M080G2GC____________________CVPO012504S9080JGN__ N

With this command I was able to go ahead and delete the partitions, iterating N through 1-9.  Once deleted, the vSphere client was able to easily add the new datastore.  With the host as the focus again, under the Storage subsection of the Configuration tab, we can now Add a new datastore.  We’re adding a local disk, so we select Disk/LUN:

Screenshot 2014-04-16 00.36.11

Next up we select it and we can see the SSD here with its wacky device name:

Screenshot 2014-04-16 00.36.17

Next we select a file system (I’m going with VMFS 5 to keep all datastores consistent and because I plan to do vSAN):

Screenshot 2014-04-16 00.36.23

Current Disk Layout is where things errored out the first time through when the partition table was wonky.  This time we sail right past both Disk Layout and Properties (naming it “SSD”) with no errors.  For formatting, I choose to allocate the maximum space available:


Screenshot 2014-04-16 13.01.27

With everything looking good, we can click Finish to create the new datastore:
Screenshot 2014-04-16 13.01.33
And voila!  One shiny new datastore online and ready for later experimentation!


Screenshot 2014-04-16 12.51.04
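For completeness, the PowerCLI equivalent of the datastore creation, using the device’s canonical name from the esxcli output above (a sketch):

$vmhost = Get-VMHost -Name "192.168.2.4"
New-Datastore -VMHost $vmhost -Name "SSD" -Vmfs -FileSystemVersion 5 `
  -Path "t10.ATA_____INTEL_SSDSA2M080G2GC____________________CVPO012504S9080JGN__"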

Well that’s a wrap for yet another entry in the series!  We now have a fully functional ESXi 5.5 base host with 9.2TB of primary RAID-based DAS for VMs and a secondary 80GB SSD datastore that will be used to support vSAN.  DirectPath is ready to go for the VDI guest.  Next up is the nested ESX installation followed by the VDI install.  Stay tuned and thanks for reading!

 

 


Last entry we got to the point where we had swapped out one of the 1TB Toshiba 7.2k SATA 2 drives for a Western Digital Red 2TB SATA3 drive and were ready to power on the server to see what would happen.  My assumption based on past experience was that the PERC H710 would complain (a lot) at best, or outright block the disk at worst.  Here is how things turned out:

Initial boot actually seemed to go well with no complaints evident at all!  Front lights were all green and nothing was showing in the PERC logs, nor were there any warnings being thrown.  Inspired, I went ahead and replaced the other 5 drives, then prepared to do a real configuration of the RAID card and see if the behavior stayed consistent.  This was my first time configuring a PERC and I was pleasantly surprised by how intuitive the config process was.  I captured it on video rather than taking screenshots:

Really a great outcome!  9.2TB RAID 5 volume online in no time at all with no warnings and solid green lights.   The addition of the T620 into the home office necessitated a full redesign of equipment placement due to space constraints.  I initially considered keeping my 4 slimline white box hosts, but after seeing how cool and quiet the T620 was at first boot, I made the decision to stick with a nested ESX/one-box configuration.  This decision went a long way towards reducing the clutter and made for a pretty decent final layout.  Here is a shot of the full set of lab kit tidily stacked together awaiting installation:

 

2014-04-13 00.51.46

 

And here is a shot of the server nestled under the wife’s desk along with the fully cabled switch, firewall, NAS and UPS:

2014-04-14 20.50.38

At this point, with all of the equipment fully installed and in place, I was satisfied with the room layout and so I decided to go ahead and explore the T620 BIOS as well as set up the iDRAC 7 in preparation for the OS install.  Rather than do a video, I took screenshots for these so I could more easily insert some written commentary.  First up let’s have a look at the initial power on screens.  On power-up we are greeted with an update letting us know that the server is “Configuring Memory”.  I assume that this is just the usual memory count, quick parity check and initialization as even with 192GB of ECC RAM it passes fairly quickly (10 seconds or so):

WP_20140413_19_26_53_Pro

 

With the memory subsystem initialized, next up is the iDRAC board.  Extremely cool that the iDRAC is initialized as early as possible during POST (very useful for an out-of-band management lights out board!):

 

WP_20140413_19_27_20_Pro

Next up is a nicely comprehensive report on CPU and memory configuration.  I really like how verbose this is, including a voltage report for the RAM.  In addition here you can see your pre-OS options – F2 for System Setup, F10 to invoke the enormously cool LifeCycle Controller capabilities which are a part of the iDRAC 7 and allow you to do full bare metal setup (including BIOS settings) remotely via the iDRAC, F11 to bring up the BIOS boot menu and prompt for boot device selection, and finally F12 to invoke PXE boot via the NIC ROM.  Speaking of which, the last bits of info we can see here are the PXE boot agent header as well as the SATA AHCI BIOS header:

WP_20140413_19_27_39_Pro

 

For this boot I selected “System Setup” which, after about 15 seconds thinking or so, launches the GUI based (with mouse support) main setup menu from which BIOS, iDRAC and Device settings can be reviewed and configured:

WP_20140413_19_30_42_Pro

Starting off with the System BIOS we can see a fairly intuitive list of settings groups – System Information, Memory and Processor, SATA, Boot and Integrated Devices, Serial, System Profile, Security and last but not least Miscellaneous:

WP_20140414_00_02_44_Pro

 

Let’s take a deeper look at a few of these starting with the Integrated Devices settings… Most of these are self-explanatory (USB, NIC, etc), but a few are worth some additional discussion.  I/OAT DMA Engine refers to Intel I/O Acceleration Technology, part of the Intel virtualization enablement technologies, which provides increased network interface throughput and efficiency by allowing direct chipset control of the NIC.  The I/OAT DMA setting enables this capability, allowing direct memory access from the NIC via the chipset.  Of course in order for this to work, the host OS has to be aware of it.  The default setting is disabled and, considering the current VMware support statement for this capability, I opted to leave it that way. SR-IOV, or “Single Root I/O Virtualization”, is another peripheral virtualization technology, this time controlled by the PCI SIG, which allows a single PCI device to appear as multiple devices.  A common example is advanced 10Gb/s adapters, which can use it to virtualize themselves and present multiple discrete interfaces to the host OS from a single physical port.  Once again the default here is disabled and I opted to leave it that way.  Last but not least, Memory Mapped I/O Above 4GB is pretty standard stuff for 64 bit PCI systems and allows, as it implies, 64 bit PCI devices to map I/O to 64 bit memory ranges (above 4GB):

 

WP_20140414_00_06_40_Pro

The next interesting grouping is the System Profile settings.  Lots of great goodies in here including some of the usual suspects like memory voltage, frequency and turbo boost tuning.   The most interesting aspect, however, is that the top line allows for quick profile setting via template using some pre-defined defaults.  In my case I am most concerned with power and thermal efficiency so I set my configuration to “Performance Per Watt”.  It’s great how much granularity is provided for control of CPU and memory power efficiency:

WP_20140414_00_06_00_Pro

Next up are the Security Settings.  Here we can set the basic system access and configuration passwords as well as control the Trusted Platform Module (TPM) state and settings:

WP_20140414_00_06_26_Pro

Leaving most settings at either their default or “power efficient” values I left the System BIOS Settings behind and moved on to the iDRAC configuration.  Up top you get a summary of both the settings and firmware versions as well as the option to dive deeper on the Summary, Event Log, Network settings, Alert status, Front Panel Security settings, VirtualMedia settings, vFlash Mode, LifeCycle Controller, System Location, User Accounts, Power and Thermal settings:


WP_20140413_19_30_52_Pro

The System Summary section gives you an opportunity to set some basic asset information including Data Center location Name, rack position, etc.  Super handy if you are centrally managing hundreds (or thousands) of iDRACs across a global footprint (cue salivating Dell reps!).   For a single server home lab setup it isn’t super relevant, but I set some info down just for fun and testing:

WP_20140413_19_33_33_Pro

We also have an opportunity to create iDRAC users and assign roles.  This is critical as these will be the credentials you will use to access the iDRAC remotely via its web portal or other remote interfaces.  Privilege levels are Administrator (full control), Operator (limited task management) and User (restricted access):

WP_20140413_19_33_45_Pro

The System Event Log menu, as expected, provides visibility into the iDRACs log:

WP_20140413_19_31_22_Pro

As we can see here on a new install the log is pretty sparsely populated!

WP_20140413_19_31_04_Pro

The Network Settings menu is the diametrical opposite of the System Event Log; lots and lots of fun levers to pull here.  Up top you can enable network access for the iDRAC and, if you have an Enterprise class board, configure it to use its dedicated NIC for access.  Alternatively, one of the onboard Ethernet ports can be used as well.  The usual Ethernet settings are here along with the option to register the iDRAC board in DNS:

WP_20140413_19_31_51_Pro

Of course it is helpful to be able to set a name for something configured to register in DNS and we are able to do that here as well.  In addition we can provide a static domain name or allow auto configuration as part of the DNS registration process.  We can also do all of the expected IPV4 and IPV6 configuration:

WP_20140413_19_32_09_Pro

 

Last up we have two really cool options: IPMI over LAN and VLAN configuration.  VLAN configuration of course allows us to configure 802.1Q tagging which is super important for any management device (which in production will almost certainly sit on a management VLAN).   IPMI over LAN allows the iDRAC to participate in an Intelligent Platform Management Interface based console implementation:

 

WP_20140413_19_32_23_Pro

Under Alerts we are able to set SNMP trap destinations for platform events generated by the iDRAC:


WP_20140413_19_31_39_Pro
Thermal settings allow us to set a thermal profile for the board or control the fan independently if we prefer:

WP_20140413_19_32_42_Pro

Power Configuration is a really powerful option which provides the capability to set hardware level power capping for the system as well as set the configuration of the redundant PSUs (up to 2).   Used in conjunction with Dell OpenManage and vCenter DPM the iDRAC power capping capability can be used to keep server energy consumption at predictable levels and shift resources around as needed when those levels are exceeded in order to maintain the balance.  Ultimately this can get as aggressive as choosing to shutdown VMs if needed in order to stay within a “power budget” if architected correctly:

WP_20140413_19_33_21_Pro

WP_20140413_19_33_07_Pro

 

Front Panel Security provides a really rich set of physical security controls for cases where the server is sitting in a real production environment.  One neat item is that you can control the message displayed on the front panel LCD:


WP_20140413_19_34_12_Pro

 

With the iDRAC all configured and the network interface online and cabled up, we can now access the excellent web based interface.  Login page is clean and quite slick:

Screenshot 2014-04-13 19.58.46

System Summary provides huge detail at a glance as well as hyperlinks for deeper dive info on individual subsystems.  You can also see the thumbnail snapshot of the virtual console here.  The virtual console is absolutely a killer feature of the iDRAC, providing hardware backed remote console support outside of any OS and allowing for bare metal configuration and management even on a headless system:

Screenshot 2014-04-13 20.00.05

Fantastic view of power consumption both in real time and historically:

Screenshot 2014-04-14 18.12.57

In keeping with the common theme, another fantastically detailed view, this time focused on the disk subsystem, both Physical:


Screenshot 2014-04-13 20.21.04

And Virtual:

Screenshot 2014-04-13 20.21.47

Last but not least, the bells and whistles.  Configuration options here for the front panel LCD. Set to display real-time power consumption:

Screenshot 2014-04-15 20.44.22

 

With the iDRAC rockin’ and rollin’ there is only one top level System Setting sub-menu left – Device Settings.  Here we can see enumerated the installed and recognized devices in the system as well as access additional configuration detail.  In this build we have the iDRAC, Intel NICs both add-in and embedded, the Broadcom 10Gb/s NIC and the PERC H710:

WP_20140413_19_37_53_Pro
Speaking of which, let’s have a look at the PERC H710 setup.  On first boot we determined that the T620 had no problem with the Western Digital Red 2TB OEM disks and, as you can see in the iDRAC physical disk ‘spoiler’ screen above, even recognizes them at 6Gb/s!  With a full boat of 6 drives on-board it’s time to make a virtual disk.   CTRL-R during POST invokes the RAID BIOS setup utility.  I decided to do a quick video of the initial setup:

And here are some static shots of the successful configuration.  All six physical disks displayed along with the 9.2TB virtual disk:


WP_20140413_19_28_18_Pro

Detailed view of the Physical Disk management tab (product ID correct, no warnings to be found):

WP_20140413_19_28_29_Pro

 

Controller Management tab allowing you to enable the BIOS and control its behavior as well as set the default boot device for the RAID controller:

 

 

WP_20140413_19_28_38_Pro

Controller properties including temperature.  Running at 49C.  Seems a bit warm, but certainly acceptable:

WP_20140413_19_28_46_Pro

And that pretty much wraps up this entry!  We now have a fully installed and configured server with out-of-band management ready to go and a nice juicy virtual disk awaiting OS install.  Next up, installing ESXi 5.5 so stay tuned!


It has been an incredibly busy few months, but this past holiday season something pretty juicy arrived under the Complaints HQ tree:

 

Nothing says the holidays like corrugated cardboard and a wooden pallet!

That’s right, for the first time a first class piece of server kit is joining The Beast in the home lab!  Previous entries have covered the fun times had at Complaints HQ with various white box adventures, but now we’re taking things to a whole other level.  As a home office based traveling architect evangelizing hybrid cloud solutions to enterprise customers, it is becoming increasingly important to teach by example.  The idea of the newly improved, Hulked out home lab is that I can build a hybrid cloud enterprise integration technology showcase, then activate and demonstrate it remotely on demand.  So what is hiding in the somewhat drab brown box emblazoned with the trusty Dell logo?  Glad you asked!

  • Model: Dell T620 Tower
  • CPU: 2 x Xeon E5-2650L v2 – 10 core, 1.7GHz base, 2.1GHz Turbo
  • RAM: 12 x 16GB 1333MHz LV RDIMM – 192GB total
  • NIC: Intel Gigabit 2P I350-t Adapter (dual port), Intel Gigabit 2P I350-t LOM (onboard), Broadcom NetXtreme II BCM57810 (dual port 10Gb/s)
  • PSU: 1 x 750W
  • RAID: Dell PERC H710, 512MB NV cache
  • HDD: 6 x 1TB 7200RPM SATA II (I swapped these out; more on that later)
  • ILO: Dell iDRAC7 Enterprise

So without a doubt this is a serious piece of kit, sporting 20 cores at roughly 2GHz under turbo and nearly 200GB of RAM, alongside terabytes of hardware RAID DAS and 24Gb/s of aggregate network bandwidth.  The best part is that this is also a low voltage build, resulting in a datacenter class server that can run off a single 750W high efficiency supply, idle at 140W, and run quieter than the 4 white box servers it replaces.  Not bad, and a definitely respectable companion to The Beast!  In honor of this new plateau I’m also introducing a new format to the blog: build videos!  I hate few things more than the sound of my voice (my appearance is one, though), so I don’t expect to use this option all that often, but in this case it seemed appropriate.  As a result these entries will feature the usual photos and screenshots alongside some video.  So cracking this baby open must be like opening the Ark of the Covenant, right?  A cascade of purifying light spilling out…  Riches beyond imagining?  Indeed, behold:

 

WP_20140413_19_15_59_Pro

All kidding aside, the included accessories, manuals and “fluff” are something short of inspirational.  The usual sub $1 keyboard/mouse combo alongside 3 really flimsy paper manuals and a power cord.  In all fairness though, this is an enterprise class machine and ultimately this is stuff that would either get stored in a closet or tossed.  What really matters is what is sitting underneath that spartan “accessory box” and in this case, Dell did not disappoint.  The initial unveiling of the server and the physical build aspects bring us to the video content.  First up is the dramatic unpacking of the supplemental accessories box:

 

 

The caster assembly is actually lightweight and small.  It could fit pretty easily inside the main crate, but it is a separate and optional SKU and so ships separately.  I decided to borrow the “cooking show” format and skip providing video of cardboard being torn and heavy equipment being lifted.  Inside the main carton is the accessory box containing the few items shown in the picture above, as well as the actual server nicely lodged in firm charcoal colored foam.  Next up, a quick hardware tour of the outside of the server itself:

 

It doesn’t fully come through in the video, but the build quality of this thing really is excellent.  The case makes for a great companion to the very high quality aluminum of my main PC’s Lian Li PCA77F.  It feels solid, looks solid, and yet is compact and not terribly heavy when populated with a single 750W supply and six 3.5″ HDDs.  It is also fairly intuitive at first glance, as evidenced by the installation of the casters:

 

 

I alluded up top to a pre-installation upgrade I decided to do.  We originally spec’d this server with the cheapest disk option to keep the budget under control and because we plan to pair the server with a NAS.  As a result it shipped with six 7200RPM 3.5″ SATA II drives – standard Toshiba 7.2k SATA disks branded as Dell.  Since I plan to run this server in a home office alongside the (fairly loud) McAfee UTM SG720 and Netgear GSM7224v2, plus the (less loud) Netgear ReadyNAS Ultra 6 and The Beast, I wanted to drop these in favor of something more acoustically friendly and less power hungry.  My ReadyNAS has a full complement of 5900RPM Seagate Barracuda Green LP drives which I like a lot, so I decided to give the Western Digital Red line a whirl for this build to mix things up a bit.  I picked up six of the OEM 2TB drives on sale at MicroCenter for a decent $99 per drive.  At twice the capacity, with 6Gb/s SATA vs 3Gb/s (noticeable in a RAID config even with slow drives like these), while running roughly 10dBA quieter and drawing about 6W less per drive than a 7200RPM disk, they make for a compelling upgrade and a good match to the LV RAM and CPUs.  A rough back-of-envelope on the power savings follows below:
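
To put that 6W-per-drive delta in perspective, here is a quick hypothetical estimate of the annualized savings (the electricity rate is a placeholder; plug in your own):

```python
# Back-of-envelope: annual energy saved by swapping all six drives, using the
# ~6W-per-drive difference cited above. Rate is a hypothetical $/kWh.
drives, watts_saved_per_drive = 6, 6
rate = 0.15                                                      # placeholder $/kWh
kwh_per_year = drives * watts_saved_per_drive * 24 * 365 / 1000  # 36W continuous -> ~315 kWh
print(f"~{kwh_per_year:.0f} kWh/yr, ~${kwh_per_year * rate:.0f}/yr saved")
```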

One more video of the physical server, storage subsystem in this case, before moving onto the first boot and the initial logical configuration:

This is a good point to break.  The next installment will cover the setup of the PERC H710 and the array, will answer the question of whether Dell is still being a pain in the ass with drives that aren’t on the HCL, and will also cover the iDRAC7 Enterprise setup before moving on to first boot and the ESXi 5.5 install.  Thanks for reading and stay tuned!


NVIDIA rolled in the New Year with a driver update!  We are now at 332.21, so I figured I’d re-run some of the last benchmarks in 4320×2560 portrait surround.  I ditched the 3D Vision Surround + accessory display setup, so unfortunately I can’t test whether the accessory display problem is gone.  I can confirm, however, that it is an accessory display issue, since 3-panel surround has been working just fine through all of this testing.

Without further ado, here are the BioShock Infinite numbers:

Per Scene Stats:
Scene                                                       Duration (s)   Avg FPS   Min FPS   Max FPS
Welcome Center                                                  32.61        75.27     11.89    577.59
Scene Change (disregard performance in this section)             6.63        79.54     23.99    154.29
Town Center                                                     22.30        78.75     21.27    388.16
Raffle                                                           8.15        78.54     15.12    483.00
Monument Island                                                  9.08       115.14     25.90    424.62
Benchmark Finished (disregard performance in this section)       3.03       130.59     41.81    410.39
Overall                                                         81.81        83.41     11.89    577.59
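
As a quick cross-check of the table, the overall figure should be the duration-weighted mean of the per-scene averages.  Running the numbers from above:

```python
# Verify the overall average FPS is the duration-weighted mean of the scenes.
scenes = [(32.61, 75.27), (6.63, 79.54), (22.30, 78.75),
          (8.15, 78.54), (9.08, 115.14), (3.03, 130.59)]  # (seconds, avg FPS)
total_time = sum(d for d, _ in scenes)                    # ~81.8s
weighted = sum(d * fps for d, fps in scenes) / total_time
print(f"{weighted:.1f} FPS")  # ~83.4, consistent with the reported 83.41
```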

Wow!  Very nice improvement here!  The overall average has jumped from 76+ to 83+ – that’s nearly a 10% gain from just a driver rev!  So far so good.  Next up I decided to run Metro again.  Same settings (note: retesting DirectX 11):

Screenshot 2014-01-07 12.24.30

 

And the results:

Screenshot 2014-01-07 12.28.52

Not a big deal here.  DirectX 11 testing under 331.83 yielded a 54 fps average, so we have gained 2 fps – probably above the normal margin of error, but not having any great impact.  I think with Metro we’re seeing the limits of the game engine; getting significantly better results at this stage is likely to just require more brute force.

I will continue to do some additional testing and update as the results come in!