With all of the hardware up and running, it was time to get vSphere going and see what these two new ESXi hosts could do. First step, as always, was some console configuration. I start by setting a password:

Next up is the management LAN config. I decide to leave VLAN configuration aside for now, since the Realtek NICs on my main PC, and the Cisco SLM2008 config, are still an unknown. The IP setup is standard stuff:
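In case anyone wants the layout spelled out, it boils down to a flat /24 shared by the two hosts and a second NIC on the main PC. Here is a quick sketch of that plan in Python, purely for illustration; the .10/.11/.12 addresses are placeholders I am assuming, not necessarily what the boxes actually got:

```python
# Management LAN plan: one flat /24, no VLAN tagging for now.
# The host addresses below are assumed placeholders, not gospel.
import ipaddress

mgmt = ipaddress.ip_network("192.168.2.0/24")
plan = {
    "esxi1":        ipaddress.ip_address("192.168.2.11"),
    "esxi2":        ipaddress.ip_address("192.168.2.12"),
    "main-pc-nic2": ipaddress.ip_address("192.168.2.10"),
}

for name, addr in plan.items():
    # Everything should land in the same untagged subnet.
    assert addr in mgmt, f"{name} ({addr}) is outside {mgmt}"
    print(f"{name:>12}: {addr}  netmask {mgmt.netmask}")
```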
With the management LAN up and running, it is time to surf on over to one of the ESXi hosts and download the vSphere Client. I configure the secondary NIC on the main PC for the 192.168.2.0 network and hit up ESXi1:
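Before firing up the browser, a quick way to confirm that the second NIC and the new subnet are actually talking to the host is to poke its HTTPS welcome page. A minimal Python sketch, again assuming 192.168.2.11 as a stand-in for ESXi1’s management IP:

```python
# Poke ESXi1's welcome page over HTTPS to confirm basic reachability.
# Assumption: ESXi1 answers at 192.168.2.11 -- substitute the real management IP.
import ssl
import urllib.request

ESXI1 = "https://192.168.2.11/"

# ESXi ships with a self-signed certificate, so skip verification for this check.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with urllib.request.urlopen(ESXI1, context=ctx, timeout=5) as resp:
    print(resp.status, resp.reason)  # 200 OK means the welcome page is being served
```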
The welcome page looks pretty familiar. I kick off the vSphere Client download and, after a few seconds, it is time to run the install. Recurring theme… 5.0 looks a lot like 4.x, no major changes:
With the vSphere Client in place, it is time to start managing the host. I am immediately warned that the clock is ticking on the 60-day eval and 59 days are left. I don’t have a key handy for 5.0, but that can be worked out later, so let’s set it aside.
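As an aside, if you would rather keep an eye on the license state from a script than by clicking around the client, a pyVmomi sketch along these lines should do it (the host address and credentials below are placeholders, and this is illustrative only):

```python
# Peek at the host's license state (eval vs. keyed) through the vSphere API.
# Assumptions: pyVmomi is installed, ESXi1 answers at 192.168.2.11, and root's
# password is whatever was set at the console earlier.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

si = SmartConnect(host="192.168.2.11", user="root", pwd="********", sslContext=ctx)
try:
    lm = si.RetrieveContent().licenseManager
    for lic in lm.licenses:
        # A host in eval mode typically shows a single "Evaluation Mode" entry
        # with a zeroed-out license key.
        print(lic.name, lic.editionKey, lic.licenseKey)
finally:
    Disconnect(si)
```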
Next up, though, is one of those really embarrassing, “reluctant to admit it”, RTFM moments. Remember that one of my big goals was experimenting with GPU acceleration and DirectPath? Well…
It turns out that while the G43 chipset brief showed “IOMMU remapping” (a.k.a. VT-d) support, that claim was for the chipset family as a whole. The G43 Express supplemental actually had a matrix showing capabilities across the family and… wouldn’t you know it… no VT-d. Clearly listed as not a feature. To make matters worse, the E5500 doesn’t support VT-d either. I knew VT-d had only shown up late in the Core 2 lifespan, but I assumed it had eventually been applied to the entire line. You know what they say about assumptions!
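Lesson learned: verify, don’t assume. For anyone wanting to check a box before building plans around it, booting a Linux live CD and looking for the ACPI DMAR table is a quick test, since the firmware only publishes that table when CPU, chipset, and BIOS all support VT-d and it is enabled. A rough sketch:

```python
# Rough VT-x / VT-d check from a Linux live environment.
# VT-x shows up as the "vmx" CPU flag; VT-d is advertised by the firmware
# through an ACPI DMAR table, which only appears when CPU, chipset, and
# BIOS all support it and it is actually switched on.
import os

def has_vt_x() -> bool:
    with open("/proc/cpuinfo") as f:
        return any("vmx" in line.split() for line in f if line.startswith("flags"))

def has_vt_d() -> bool:
    return os.path.exists("/sys/firmware/acpi/tables/DMAR")

if __name__ == "__main__":
    print("VT-x (vmx flag):  ", "yes" if has_vt_x() else "no")
    print("VT-d (DMAR table):", "yes" if has_vt_d() else "no")
```

On this G43/E5500 combo, that DMAR check would come up empty, which is exactly the problem.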
So with no VT-d, and getting VT-d requiring pretty much a complete replacement of both motherboard and CPU, what would any sane hobbyist do? Forge ahead without playing with GPU acceleration in VDI? Perhaps… I’m not a big supporter of sanity, though, so it’s time to take a slightly divergent path:

My GPU acceleration entry covers how Hyper-V and ESX/ESXi differ in this area. Hyper-V doesn’t have support for VT-d passthrough yet since, in the slim hypervisor/parent partition model of Hyper-V and Xen, IOMMU remapping is kind of problematic: you’re taking a PCI/PCIe device away from the parent OS and giving it to a child/guest. Instead, Hyper-V goes the paravirtualization route through the Calista technology MSFT acquired (now RemoteFX). That sounds fun too. As does getting ESXi working as a guest under Hyper-V. So onward ho! Next episode… Hyper-V divergence!