With public cloud infrastructure-as-a-service adoption accelerating even in the traditional enterprise space, comparative performance measurements between providers are becoming increasingly important to architects and developers. Ultimately the smart money is on a multi-cloud strategy with smart orchestration and an SLA/service-centric view of your organization, but knowing what you’re buying is an important part of making economics-based decisions about which platform to leverage at a given time or in a given scenario.
Test Setup and Overview
So with this background in mind, I decided to take a deeper look at the current market leader in IaaS, AWS EC2, and compare it against the newest player in the mix, VMware’s vCloud Hybrid Service (vCHS). I am fortunate to have access to vCHS for testing purposes, and Amazon makes a free tier available which, while limited, is perfectly useful for testing the low end of the spectrum. While not perfect, testing at this level is valuable as long as care is taken to make the test as equivalent as possible.
With my concept in mind and credentials in hand, I set off down the roads of the two competing platforms to document not only the performance, but the overall experience. For my testing, I settled on the following mix:
- OS: Windows 2008 R2 – I wanted to use Windows for the testing since it is relevant to such a large number of enterprise customers
- Benchmark Suite: PCMark 8 v2 – I will add to this list, but I wanted to start with a comprehensive suite that has an extremely deep results database
- System Info: CPU-Z – to take a deeper look at what the platform is providing for compute, I settled on my old favorite CPU-Z
- Virtual Hardware: this one is interesting because the two platforms take a dramatically different approach here. Amazon provides the hardware configuration for you in a variety of “instance sizes”. For free, the biggest instance you can get is a “T1.micro”. With vCHS, VMware sells you blocks of capacity, either multi-tenant (“Virtual Private Cloud” – a base of 5GHz and 20GB RAM to allocate to VMs) or dedicated (“Dedicated Cloud” – 30GHz and 120GB RAM to allocate to “Virtual Datacenters”, from which capacity is then allocated to VMs). Based on the limits of the AWS free tier, the T1.micro instance’s hardware mix set the baseline for the test:
- CPU: 1 vCPU at 1.8GHz (Sandy Bridge era). This is a tricky baseline to set, unfortunately, since the whole point of cloud is that hardware detail is abstracted away. Compounding this is that with vCHS, as mentioned above, you carve vCPUs out of a GHz pool. If you only provision a single vCPU from your pool, you have lots of potential for massive burst performance, which will ramp down as additional vCPUs are provisioned into VMs. Still, with access to a vCHS Dedicated Cloud for testing, I was able to carve up a very small single-VM Virtual Datacenter of 2GHz for this test. Definite points for flexibility to vCHS here, although not a negative per se for AWS since the models are so dramatically different.
- RAM: the T1.micro gives you 618MB of RAM, which makes Windows 2008 R2 quite an interesting science project (it does work, though). For vCHS I gave the VM 640MB of RAM.
- Storage: for AWS, the T1.micro free tier instance running Windows 2008 R2 comes with a 30GB EBS standard disk. This is network-attached block storage using Amazon’s proprietary scheme for EBS, backed by RAID 1. For vCHS I gave the VM a 30GB VMDK on the SSD-accelerated storage tier, a tiered, RAID-protected storage model that should generally provide higher IOPS than standard EBS, but does not allow for prescriptive IOPS assignment the way Provisioned IOPS (PIOPS) does.
- Network: single standard 1Gb/s (in theory) virtual NIC for the instance and for the VM
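To make the difference between the two sizing models concrete, here is a minimal Python sketch of a vCHS-style capacity pool that VMs are carved out of, in contrast to EC2’s fixed per-instance hardware bundles. The class and method names are hypothetical; the numbers are drawn from the plan sizes above:

```python
# Model a vCHS-style capacity pool: VMs draw vCPU GHz and RAM from a fixed
# block, unlike EC2 where each instance size is a fixed hardware bundle.
class CapacityPool:
    def __init__(self, ghz, ram_gb):
        self.ghz = ghz          # total CPU pool, e.g. 30 GHz for Dedicated Cloud
        self.ram_gb = ram_gb    # total RAM pool, e.g. 120 GB

    def carve_vm(self, ghz, ram_gb):
        """Allocate a VM from the pool; fail if the pool is exhausted."""
        if ghz > self.ghz or ram_gb > self.ram_gb:
            raise ValueError("pool exhausted")
        self.ghz -= ghz
        self.ram_gb -= ram_gb
        return {"ghz": ghz, "ram_gb": ram_gb}

# Dedicated Cloud block (30 GHz / 120 GB); carve the small 2 GHz test VM
pool = CapacityPool(ghz=30, ram_gb=120)
vm = pool.carve_vm(ghz=2, ram_gb=0.625)   # ~640 MB, matching the test VM
print(pool.ghz, round(pool.ram_gb, 3))    # remaining capacity: 28 119.375
```

The point of the model: every vCHS VM you power on shrinks the shared pool, whereas every EC2 instance arrives with its own fixed slice regardless of what else you run.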
Creating the machines on either platform is a great “eureka” moment for any cloud skeptic. It is incredible how effortless it is to grab capacity with nothing more than a browser on either platform. I took some screenshots of the process, but first here are the results of the “VM creation time” test. Keep in mind that with AWS the “time to cloud” is instantaneous: you click through a sign-up process no different from any standard web service registration and can then immediately start launching instances. With vCHS it is, for now, a more enterprise-centric approach that requires a purchase process; at this stage you cannot simply visit the website and be up and running in minutes. That said, once provisioned, the “time-to-VM” is quite comparable, so that is what I measured:
Time to VM/Instance Results:
- vCHS Time to VM: 2:30
- EC2 Time to Instance: 3:30
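For reference, the stopwatch approach behind these numbers could be automated. A small sketch of the idea, with a pluggable status poller standing in for the real API call (for EC2 that would be a wrapper around a DescribeInstances lookup; the function names here are hypothetical):

```python
import time

def time_to_state(poll_status, target="running", interval=1.0, timeout=600):
    """Poll until the instance/VM reports `target`; return elapsed seconds.

    `poll_status` is any callable returning the current state string,
    e.g. a wrapper around EC2's DescribeInstances, or the vCHS task
    status (both stand-ins here, not real API bindings).
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if poll_status() == target:
            return time.monotonic() - start
        time.sleep(interval)
    raise TimeoutError(f"never reached state {target!r}")

# Demo with a fake poller that reports "running" on the third check
states = iter(["pending", "pending", "running"])
elapsed = time_to_state(lambda: next(states), interval=0.01)
print(f"time to VM: {elapsed:.2f}s")
```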
vCHS scores a victory here! The time to bring the VM online was noticeably quicker. EC2 time to instance was variable over a few runs, with 3 minutes 30 seconds being the best time; two other creation attempts were actually a bit slower. OK, those are the numbers, but how was the experience? I’ll let the screenshots tell the story here. First, AWS:
Very simple click-through console experience! The first image shows the basic EC2 console view with two instances provisioned. The second image is the first step of the “Launch Instance” process, presenting the standard EC2 catalog from which an AMI (essentially an OS gold master) can be selected. Huge depth here. Next up is Step 2, the Instance Selection dialog, where you choose the instance size. No choice is given since we selected “free tier”, which only allows T1.micro. Step 3 allows us to configure instance provisioning details. Tons of powerful options here, all out of scope for this discussion. In Step 4 we add storage, again prescribed by our service tier. In Step 5 we can apply some metadata and “tag” our instance. And finally, in Step 6 we assign a Security Group (or create one), which is a hypervisor-level firewall protecting the instance at the network level. So what is the process like with vCHS? Let’s take a look:
Fairly similar experience overall. The first three screenshots are quite different from anything in AWS, as they cover allocating a block of capacity to a Virtual Datacenter. In this case I am reducing my allocation from 5GHz down to 2GHz to allow for the constrained test. Next up is the catalog view of vCHS, following the click-through from “Add a VM” to selecting Windows 2008 R2 Standard. As we can see here, this will be a cost item; worth noting that AWS provides Windows on the free tier. Next we set our options for the virtual machine in one spot (compute, storage and RAM), connect it to a network, and then click “Deploy the Virtual Machine” to create it. With vCHS, the networking and security configuration happens in a separate part of the UI and is a bit more aligned with what traditional vSphere administrators, or network administrators for that matter, might expect. Within the Network Configuration sections of the vCHS UI you can set up firewall and NAT rules at the virtual gateway (vs the subnet ACL or hypervisor-level security group controls in EC2), as well as create up to 9 defined private subnets off of that gateway to which VMs can attach. In EC2, private IP space is allocated at the CIDR block level within a VPC, and the Virtual Private Gateway, the virtual router internal to the VPC, and the NAT that can be added during VPC creation all operate fairly transparently. Overall I would say that vCHS networking is more flexible and definitely a more direct match to legacy skill sets, whereas AWS networking is simpler for those who don’t really care much about the details of networking and just want to get their services communicating (read: developers).
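The EC2 model of carving subnets out of a VPC’s CIDR block can be illustrated with Python’s standard ipaddress module. The address ranges below are illustrative, not taken from either platform:

```python
import ipaddress

# In EC2, private address space is assigned to a VPC as one CIDR block and
# then divided into subnets; vCHS instead lets you define up to 9 private
# subnets behind the virtual gateway. Addresses here are illustrative.
vpc_block = ipaddress.ip_network("10.0.0.0/16")

# Split the VPC block into /20 subnets (2^(20-16) = 16 of them)
subnets = list(vpc_block.subnets(new_prefix=20))
print(len(subnets), subnets[0])  # 16 10.0.0.0/20

# Deciding whether an instance IP falls inside a given subnet is just a
# membership test, the same check a subnet ACL applies
instance_ip = ipaddress.ip_address("10.0.5.17")
print(instance_ip in subnets[0])  # True: inside 10.0.0.0/20
```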
So What’s Under the Hood?
At this stage our Windows servers are up, so what did CPU-Z find? Very interesting results actually. First up EC2 T1.micro:
Sandy Bridge EX, Xeon 2650 @ 2GHz, running at 1.8GHz with a bus speed of 100MHz
Next up let’s have a look at the vCHS VM:
Sandy Bridge EX, Xeon 2660 @ 2GHz, running at 2.1GHz with a bus speed of 66MHz
Why is there a difference in the perceived bus speed of the vCPU? I’m not sure, actually, but it may be a difference in how ESXi presents hardware to the OS vs Xen. In any event, the benchmark results will ultimately tell the tale of the tape here. Next up, let’s take a look at what the network performance was like downloading the (massive) 2.9GB PCMark 8 package.
Network Download Performance
Unfortunately I was not able to pull the package from the same mirror for both servers, so what I did was choose the highest-performing mirror that each server was able to reach. Here is how they stacked up. First up, EC2 downloading from Tech Powerup. We can see a 2.85MB/s sustained rate here. Not bad for free, actually:
And vCHS downloading from Gamers Hell. Huge bandwidth here! 9.5MB/s sustained!
The vCHS VM was able to take full advantage of the empty gateway (only one VM behind it) and consume in excess of the allocated 50Mb/s out to the internet. A super impressive result, and a clear victory, but worth noting that this is compared to the AWS free tier, and technically you can launch as many of those free instances as you want. As additional vCHS VMs become active within the Dedicated Cloud, they will share that bandwidth. Of course, bandwidth can be added a la carte, so once again the offerings are not really directly comparable in terms of consumption models.
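Since the download tools report megabytes per second while the gateway allocation is quoted in megabits, a quick conversion shows how far over the 50Mb/s line the vCHS VM actually went (ignoring protocol overhead and the MB-vs-MiB distinction):

```python
def mbytes_to_mbits(mb_per_s):
    """Convert MB/s (megabytes) to Mb/s (megabits): 1 byte = 8 bits."""
    return mb_per_s * 8

ec2 = mbytes_to_mbits(2.85)   # sustained EC2 free-tier download rate
vchs = mbytes_to_mbits(9.5)   # sustained vCHS download rate

print(f"EC2:  {ec2:.1f} Mb/s")    # 22.8 Mb/s
print(f"vCHS: {vchs:.1f} Mb/s")   # 76.0 Mb/s, well above the 50 Mb/s allocation
```

So the observed 9.5MB/s works out to roughly 76Mb/s, about 50% over the allocated rate.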
PCMark 8 Install and Setup
OK, PCMark has been downloaded, so let’s install it. The installation goes as expected with no hiccups, and is actually not noticeably slow on either machine, which is impressive considering they have sub-1GB RAM and are running 2008 R2. Quick shots of the install, just for reference:
For the actual tests we are going to run the “Work Test” and the “Storage Test”. The other tests require hardware-accelerated video, which we do not have, and are less relevant anyhow since they focus on consumer workloads like gaming and multimedia. In addition, the Work test offers options for “Accelerated”, which leverages OpenCL (and again, we have no GPU, so not relevant), or “Conventional”. I opted for Conventional, which aspires to profile baseline performance:
The series actually takes quite a long time to run! Here are two shots of the action in progress: