Project SandyBridge-E… The insanity escalates!


Being a tech addict bears some surprising, and depressing, similarities to other more, ahem, chemical addictions.  The dopamine rush from procuring new gear can be truly intoxicating.  So it's no surprise that, given the events chronicled in the GTX680 adventures, the slippery slope was primed and greased.  With the case open, and the new case door and 4GB 680s on deck, it seemed almost wrong not to ratchet the complexity up by an order of magnitude or five and just replace everything.  Really! It does make sense! Ah, the sad rationalizations of an addict…

I have to say that in my entire history of upgrades (and I've been building Intel boxes since the 8088 era), this is the one I struggled with the most.  I can say I nearly stayed clean.  It's not that Intel did such a bad job with SB-E and X79; it's more that they did such an incredibly good job with X58 and Westmere. It's extremely difficult to identify a reasonable use case that can tax a 6-core 980X at 4GHz+, a speedy SATA2 SSD and a GTX680/Radeon 6970 or two. Gaming is usually the driver for ever-increasing horsepower, but at single-monitor resolutions up to 1080p we're looking at seriously diminishing returns. Some of this is common sense. Pixel density and 3D pipeline complexity iterate more slowly than Moore's Law. Also, developers make most of their money on consoles, so targeting top-end hardware would result in software in need of serious detuning to run well on an Xbox 360 or PS3.

Then you also have to consider Intel's position. Intel is dominant in traditional microcomputing, but this is no longer a growth space. Mobility, where low power, low cost and high efficiency reign, is the next market, and this is a battleground where Intel has consistently failed to capture share. It makes sense, then, that investing heavily in massive enthusiast desktop chips is a less than attractive proposition for shareholders. Intel badly needs a mobile play, and rival AMD, with a strong on-die GPU courtesy of ATI, is potentially dangerous competition. In the meantime Qualcomm, Samsung and ARM continue to gobble share.
As a result, the past tick and tock, and the forthcoming tick and tock, have a clear focus on energy efficiency and process efficiency (onboard GPU, removing the last remnants of the Northbridge, etc.) rather than huge IPC or IU density gains.  The one area that has remained a bright spot for Intel has been the server space, but even here they are pressured by diminishing margins and a demand for increased power and thermal efficiency, as ODMs selling to cloud players represent an increasingly large piece of the x86 server pie and traditional premium server players continue to fall by the wayside.   As a result of all of these converging trends, there just hasn't been a big need for Intel to push the IPC envelope far beyond the already impressive place that Nehalem left us.  SandyBridge is nice, but it certainly isn't the huge leap that the first Core i7 was.

All of that said, the CPU is only one part of the computing puzzle, and there are other areas that have pushed forward since 2009 that do bring some significant value.  For my particular focus, which is real-time 3D at surround resolutions, GPU power actually still has a way to go before being really sufficient to do 3x1080Px120Hz.  Multiple GPUs working together to tackle this Herculean rendering task need to exchange a massive amount of data.  The limited PCI-E lanes on X58, combined with the limited data rate of PCI-E 2.1 and the huge number of PCI-E lanes 3 or 4 modern GPUs can consume, mean that there is a real potential benefit to PCI-E 3.0 in multi-monitor, multi-GPU scenarios.  Similarly, SSDs continue to push forward in terms of absolute throughput on large sequential transfers.   They have evolved far enough to finally outstrip the 300MB/s bandwidth of a SATA2 channel.  While large sequential transfers certainly aren't a common daily task, when you do need them it's nice for them to be over as soon as possible.  SATA3 combined with a fast 6Gbps SSD is a great leap forward here.  Similarly, USB 3 has huge potential to unlock a whole new class of portable, removable devices by providing last generation's storage protocol bandwidth (4.8Gbps-ish) on the desktop. And of course the quad-channel RAM and 10–15% clock-for-clock boost in execution performance are nice as well.  For me, though, it's that potential for improved GPU performance in my use case that was the real driver, and also what I'll be testing once the build is done.  So without any further delay, on to the build!
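Those interface numbers are easy to sanity-check with a little arithmetic once you account for line-code overhead (8b/10b for SATA and PCI-E 2.x, the much leaner 128b/130b for PCI-E 3.0). A minimal sketch; the `effective_mbps` helper is my own name, and the signaling rates are the standard published figures:

```python
# Effective bandwidth of a few serial links after line-code overhead.
# 8b/10b carries 8 payload bits per 10 transmitted; 128b/130b carries
# 128 per 130, which is why PCI-E 3.0 nearly doubles 2.x per lane
# despite only a 5 -> 8 GT/s raw rate bump.

def effective_mbps(gigatransfers_per_sec, payload_bits, total_bits):
    """Raw signaling rate minus encoding overhead, in MB/s."""
    useful_fraction = payload_bits / total_bits
    return gigatransfers_per_sec * 1e9 * useful_fraction / 8 / 1e6

links = {
    "SATA2 channel (3 Gb/s, 8b/10b)":      effective_mbps(3.0, 8, 10),
    "SATA3 channel (6 Gb/s, 8b/10b)":      effective_mbps(6.0, 8, 10),
    "PCI-E 2.x lane (5 GT/s, 8b/10b)":     effective_mbps(5.0, 8, 10),
    "PCI-E 3.0 lane (8 GT/s, 128b/130b)":  effective_mbps(8.0, 128, 130),
}

for name, mbs in links.items():
    print(f"{name}: {mbs:.0f} MB/s")
```

Running this gives ~300 MB/s for SATA2, which is exactly the ceiling fast SSDs have been bumping into, and ~985 MB/s per PCI-E 3.0 lane versus 500 MB/s for 2.x.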

Introducing… The new cast!

RAM, Proc, SSD… Maximizing dollar value per square inch!
x79 Workstation… One of Asus’ thousands of x79 offerings

For this go around I decided on the following parts for the build:

  • CPU – Core i7 3960x Extreme – yeah yeah, I know the 3930 is the smart choice, is almost as good for half price, blah blah 🙂
  • RAM – Corsair Vengeance PC15000 (1866MHz) 16GB kit (4×4)
  • Motherboard – Asus P9X79-WS – X79 Workstation board
  • SSD – Crucial M4 512GB SATA3

Right off the bat, it is worth talking about some of what is new on the SB-era platforms.  One big thing is UEFI.  At first glance, it is easy to forget, or ignore, that UEFI is a significant departure from the legacy BIOS in many ways.  I won't get into the full history here, but Apple had moved from OFW to EFI quite a while back; on the PC side of things, most vendors waited on the UEFI spec before attempting to migrate the standard.  The reason for this is likely the extreme complexity of the PC ecosystem (the diversity of devices, operating systems and legacy support) and the fact that within the PC ecosystem the BIOS is much more exposed to the end user.  In the Apple world, with monolithic vendor control and "BIOS hacking" much less likely, the transition to the first rev of EFI was less potentially disruptive.

And now that UEFI is here in a big way, we can start to see the implications of the disruption.  To help give an idea of what we are talking about here, a few things to keep in mind:

  • Legacy BIOS presents devices to the operating system using device identifiers, the operating system formats storage devices with a MBR (master boot record) and a legacy partition table (all with well defined formats)
  • UEFI BIOS presents devices to the operating system using the UEFI data structure, the operating system formats storage devices with a GPT (GUID partition table) which contains the partitioning and boot data to enable bootstrap
  • Operating systems that do not support UEFI (XP) have no clue what a UEFI data structure is.  For these OSes, pre-boot, a system with no legacy support has no boot devices
  • Operating systems that do support EFI configure themselves for it on initial boot and during install.  After that, the configuration must stay consistent for the system to keep booting.  If you boot up initially with EFI, install Windows 7 as EFI, and then later change the boot order to present legacy first, Windows won't boot, because the bootstrap loader won't be on the area of the disk that the BIOS hands off to.  In other words… INSTALL with EFI, STAY with EFI.  INSTALL with legacy, STAY with legacy.  You make this decision early, and once
  • Vista doesn't fully support EFI (visit MSFT for details here: http://msdn.microsoft.com/en-us/library/windows/hardware/gg463140.aspx)
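The MBR-versus-GPT distinction above is visible right in the first two sectors of a disk. A minimal sketch of how a tool might tell them apart; `classify_disk` and the constant names are mine, but the on-disk facts used are standard: a valid MBR ends in the 0x55 0xAA signature and holds four 16-byte partition entries at offset 446, a GPT disk carries a single "protective" MBR partition of type 0xEE, and the GPT header at LBA 1 starts with the ASCII signature "EFI PART":

```python
# Distinguish legacy-MBR from GPT disks given their first two 512-byte
# sectors (LBA 0 and LBA 1). Pure byte inspection, no OS calls.

MBR_SIG_OFFSET = 510          # 0x55 0xAA lives in the last 2 bytes of LBA 0
PART_TABLE_OFFSET = 446       # four 16-byte partition entries start here
PART_ENTRY_SIZE = 16
PART_TYPE_OFFSET = 4          # partition-type byte within each entry
GPT_PROTECTIVE_TYPE = 0xEE    # type of the protective MBR entry on a GPT disk
GPT_HEADER_SIG = b"EFI PART"  # first 8 bytes of the GPT header at LBA 1

def classify_disk(sector0: bytes, sector1: bytes) -> str:
    """Return 'gpt', 'mbr', or 'unknown' for a disk's first two sectors."""
    if sector0[MBR_SIG_OFFSET:MBR_SIG_OFFSET + 2] != b"\x55\xAA":
        return "unknown"              # no boot signature: not a formatted disk
    if sector1[:8] == GPT_HEADER_SIG:
        return "gpt"                  # GPT header found at LBA 1
    # Fall back to scanning the MBR entries for the protective 0xEE type
    for i in range(4):
        entry_start = PART_TABLE_OFFSET + i * PART_ENTRY_SIZE
        if sector0[entry_start + PART_TYPE_OFFSET] == GPT_PROTECTIVE_TYPE:
            return "gpt"
    return "mbr"                      # valid signature, ordinary entries
```

This is also why an XP-era OS sees a GPT disk as one giant unknown partition: all it can parse is the protective MBR entry, which deliberately covers the whole disk.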

OK, that's it for this installment.  Stay tuned for part II, where I will detail the physical build process, some interesting case challenges/mods, and talk a little bit about my Windows 8 experience (spoiler: why I'm still running 7!)
