Part the First: Introduction
Momentum in the cloud space continues to accelerate.  Trends that have been clearly indicated for a few years now are starting to evolve rapidly.  In short, IT is dead, all hail Shadow IT.  Dramatic, but perhaps not really accurate.  The reality is we are solidly in the midst of a period of creative destruction.  This isn't the shift from physical to virtual; it's the shift from mainframe to distributed systems.  What is the proof of this?  There are two indicators that mark a period of creative destruction in technology: the emergence of dramatically new design patterns and a shift in the sphere of control.  The latter is clear.  Core IT, which has always struggled in its relationship with the business it serves, is clearly now ceding authority to architects, analysts and developers in the business lines, people who are tied closely to profit centers and whose work's value can be clearly articulated to the CEO.  This shift has been brewing for quite some time, but the emergence of viable public cloud platforms has finally enabled it.  The former, the shift in design patterns, flows directly from this more developer-centric IT power structure.  In contrast to the shift from physical to virtual, which saw almost no evolution in how applications were built and managed (primarily because it was core IT infrastructure folks who drove the change), the shift from virtual to cloud is bringing revolutionary change.  And if there is any doubt that this is vital change, just step back from technology for a moment and consider this: what business wouldn't want a continually right-sized technology footprint, located where you need it when you need it, which yields high-value data while serving customers, all for a cost that scales linearly with usage?  That is the promise already being delivered in public cloud by those who have mastered it.  The rub, though, is in mastering it.
Programmatic control and management of infrastructure (devops) isn't easy, and the tools aren't yet mature enough for developers to simply not worry about it.

Part the Second: Historical Context
Before we get to where things are headed, it's worth revisiting how we got here.  Looking back, "cloud" became a meaningful trend by delivering top-down value.  The first flavors of managed service which caused the term to catch on were "Software as a Service" and "Platform as a Service", both of which are application-first approaches that disintermediate core IT.  SaaS obviously brings packaged application functionality directly to the end user, who consumes it with minimal IT intervention.  Salesforce is the great example here, causing huge disruption in a space as complex as CRM by simply giving sales folks the tools they needed to do their job, with a billing model they could understand and sell internally to the business.

PaaS sits at the other end of the spectrum and was about giving developers the ability to build and deploy applications without caring about pesky infrastructure components like servers and storage.  Google App Engine and Microsoft Azure (and a strong entry from Salesforce in Force.com) blazed the trail here.  Ironically, though, most developers weren't quite ready to consume technology in this way, the shift in design thinking hadn't occurred yet, and the platforms had some maturing to do (initial releases were too limiting for current design approaches while not bringing any fully realized alternative).  It was at this point that Amazon entered the market with S3 and EC2 (basically storage and virtual machines as a service).  Infrastructure as a Service was born, in turn giving birth to confusing things like "hybrid cloud", and, as early players like Google and Microsoft pivoted to also provide IaaS, it looked like maybe cloud would be more evolution than revolution.

Part the Third: Shifting Patterns
Looking deeper, though, it's clear that commodity IaaS is just a stopgap.  Even the AWS IaaS portfolio reveals all sorts of services, both vertically and horizontally, that are well beyond commodity infrastructure.  The decade-long shift from physical to virtual brought no real change in how servers were deployed and managed, or in how applications were built; it just adapted existing processes and patterns to a faster deployment model.  The shift to cloud, by contrast, has already brought revolutionary change in three years when it comes to design patterns and how resources are managed and allocated.  The best way to understand this is to consider how cloud design patterns compare to legacy design patterns.  The AWS approach is a bit of a bridge between past and future here, and so provides a very easy to understand example:


A typical 3-tier app is illustrated above.  Web, app, data; scale out, scale up; simple stuff.  Even in the AWS example, though, which maps very closely to a traditional infrastructure approach by design, there are some dramatic differences.  The A and B zones in the diagram represent AWS availability zone diversity, meaning physically discrete data centers.  Now note that load balancing, a fairly straightforward capability, is spanning these availability zones.  Driving down a layer, we see traditional server entities, so the atomic units of the service are still Linux or Windows VMs, but note that they are not only "autoscaling", but are autoscaling across the physically discrete datacenters.  The implication of this architecture is that the application actually consists of n web and app nodes across two physical locations, with dynamically managed access and deployment.  Moving to the data tier, we can see similar disruption.  Rather than a standard database instance, we see a data service scaling horizontally across the physical locations.  Finally, unstructured data assets aren't sitting in a storage array, but rather in a globally distributed object store.
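The scaling behavior described above can be sketched in a few lines. This is a toy simulation of the logic only, not the AWS API; the thresholds, zone names and node limits are illustrative assumptions:

```python
# Toy autoscaling sketch: add or remove nodes based on load, and spread
# the fleet evenly across two physically discrete availability zones.

def desired_node_count(avg_cpu_pct, current_count, low=30, high=70,
                       min_nodes=2, max_nodes=8):
    """Scale out when average CPU is high, scale in when it is low."""
    if avg_cpu_pct > high and current_count < max_nodes:
        return current_count + 1
    if avg_cpu_pct < low and current_count > min_nodes:
        return current_count - 1
    return current_count

def place_nodes(count, zones=("us-east-1a", "us-east-1b")):
    """Round-robin nodes across availability zones for diversity."""
    return [zones[i % len(zones)] for i in range(count)]

# Under load (85% CPU) a 3-node fleet grows to 4, alternating zones.
fleet = place_nodes(desired_node_count(avg_cpu_pct=85, current_count=3))
```

The real service applies the same shape of policy continuously, which is what makes the "n nodes across two locations" footprint dynamic rather than fixed.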

On prem, much of this is very difficult (geographic diversity, autoscaling, a giant object store) and some of it is impossible (capacity on demand, a database run as a service).  For the N-tier app use case there is no immediate impact to the design pattern (hence Amazon's mindshare success with legacy apps), but the implications of the constructs are clear.  If you can dynamically scale infrastructure globally on demand, and maintain service availability, there is no need to limit your architectures based on the operational limits of traditional infrastructure.  This is where cloud design patterns, and the notion of "design for failure" (versus the legacy approach, where you assume ironclad fault tolerant infrastructure), were born.  At this stage none of this is theoretical, nor is it just a Netflix or NASA game.  Even traditional enterprises have real solutions in production.  How do we operationalize all of this, though?  That's been the really hard part so far.  If there is one advantage to traditional infrastructure, it's that there is deep tribal knowledge and mature tooling to manage it.  Cloud platforms really are infrastructure as code and expect management via API.  Old tools haven't caught up, or no longer apply, and new tools have steep learning curves or require moderate development skill.  This is why we have seen the rise of devops and the explosion in interest in platforms like Chef, Puppet, Mulesoft, etc.  It's a tough problem, though, because ultimately devs don't want to inherit ops (this would count as a huge cloud downside for them), and it's not clear that ops folks can reskill quickly enough, or at all, to transition.  In short, there is currently a vacuum, and most folks are betting that the space will hash out quickly and that the tools will evolve before there is a real need to solve this from the customer side.
Personally, I see most IT shops investing very cautiously here and "buying operate", even as they shift real production to cloud, until the directional signals are clearer.
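The "design for failure" mindset mentioned above is easy to illustrate. Here is a minimal sketch (the zone functions are hypothetical stand-ins for real service endpoints): instead of assuming the infrastructure never fails, the caller expects individual nodes or zones to fail and routes around them.

```python
# Minimal "design for failure" sketch: try replicas in turn, because any
# single node (or an entire zone) may be down at any moment.

def call_with_failover(replicas, request):
    errors = []
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError as exc:
            errors.append(exc)  # node failed; fall through to the next zone
    raise RuntimeError(f"all {len(replicas)} replicas failed: {errors}")

# Hypothetical replicas: zone A is down, zone B answers.
def zone_a(req): raise ConnectionError("zone A unreachable")
def zone_b(req): return f"handled {req} in zone B"

print(call_with_failover([zone_a, zone_b], "GET /"))
```

The legacy alternative, assuming fault tolerant iron underneath, pushes this responsibility down into expensive infrastructure; the cloud pattern pulls it up into cheap code.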

Part the Fourth: The Topic at Hand
So what are the directional signals?  Consider: why should we limit ourselves to the constructs of legacy infrastructure if the inherent flexibility of the service can free us from them?  The answer is that we shouldn't, and the directional signals are proving this.  So where are these technologies headed, what do they mean, and why?  I've chosen a few of the big ones to explore.  Before we get into specifics, let's spend some time thinking about what is really required to get to "self operating infrastructure" and what might be missing from the architecture presented above.

The Trinity
Where the proverbial rubber meets the road are, of course, the basic resource units.  We need bytes of RAM to hold the data and code currently being executed, we need bytes of long term storage to hold them at rest, we need compute cycles to process them, and we need network connectivity to move them in and out.  Compute, network and storage: these abstractions remain the holy trinity and are so fundamental that nothing really changes here.  Today, access to these resources is gated either by a legacy operating system (for compute, network, and local storage) or an API to a service (for long term storage options).  Unfortunately, the legacy OS is a pretty inefficient thing at scale.

A Question of Scale…
Until very recently, traditional operating systems really only scaled vertically (meaning you build a bigger box).  Scaling horizontally, through some form of "clustering", was either limited to low scale (64 servers, let's say) and "high availability" (moving largely clueless apps around if servers died), or achieved high scale by way of application resiliency that could scale despite clueless base operating systems (the web being the classic example here).  This kind of OS-agnostic scaling depends on lots of clever design decisions and geometry, and is a fair bit of work for devs.  In addition to these models, there were some more robust niche cases.  Some application specific technologies were purpose built to combine both approaches (an example being Oracle's "all active, shared everything" Real Application Clusters).  And finally, the most interesting approaches were found in the niches where folks were already dealing with inherently massive scale problems.  This is where we find High Performance Computing and distributed processing schedulers and also, in the data space, massive data analytics frameworks like Hadoop.
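As a concrete taste of the "clever design decisions" that let applications scale despite a cluster-unaware OS, consider deterministic sharding: every node can independently compute which of n peers owns a key, with no shared state and no help from the operating system. A minimal sketch:

```python
# Deterministic sharding: a stable hash routes each key to the same node
# every time, so any web node can compute the route independently.

import hashlib

def shard_for(key, num_shards):
    """Map a key to one of num_shards nodes, stably across processes."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# Two independent nodes agree on the owner of "user:1001" without talking.
owner = shard_for("user:1001", 4)
```

This is the simplest version of the trick; production systems add consistent hashing so that adding a node doesn't reshuffle every key, but the principle is the same.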

Breaking down the problem domain, we find that in tackling scale we need some intelligence that allocates and tracks the usage of resources; accepts, prioritizes and schedules jobs against those resources; stores, manages and persists data; and provides operational workflow constructs and interfaces for managing the entire system.  There is no "one stop shopping" here.  Even the fully realized use cases above are a patchwork of layered solutions.
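That intelligence can be sketched as a toy: a scheduler that tracks free resources per node, accepts prioritized jobs, and places each job where capacity exists. This illustrates the shape of the problem only; it is not any real scheduler.

```python
# Toy cluster scheduler: allocate and track resources, and schedule
# prioritized jobs against them (lower priority number runs sooner).

import heapq

def schedule(nodes, jobs):
    """nodes: {name: free_cpus}; jobs: list of (priority, job_name, cpus)."""
    heapq.heapify(jobs)                       # accept and prioritize
    placements = {}
    while jobs:
        priority, job, cpus = heapq.heappop(jobs)
        for node, free in nodes.items():
            if free >= cpus:
                nodes[node] -= cpus           # allocate and track usage
                placements[job] = node
                break
        else:
            placements[job] = None            # no capacity: job stays pending
    return placements

result = schedule({"n1": 4, "n2": 2},
                  [(0, "db", 3), (1, "web", 2), (2, "batch", 4)])
```

Even this toy shows why there is no one-stop shopping: it says nothing about data persistence or operational workflow, which real stacks layer on separately.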

…and Resource Efficiency
Not only are the traditional OS platforms lacking in native horizontal scaling capabilities, but, getting back to our resource trinity, they aren't particularly efficient at resource management within a single instance either.  Both Linux and Windows tend to depend on the mythical "well behaved app" and are limited in their ability to maximize utilization of physical resources (which is why virtualization had such a long run: it puts smarter scheduling intelligence between the OS and the hardware).  But how about inside that OS?  Or how about eliminating the OS altogether?  This brings us nicely to a quick refresher on containers.  The point of any container technology (Docker, Heroku, Cloud Foundry, etc.) is to partition up the OS itself.  Bringing back a useful illustration contrasting containerized IaaS to Beanstalk from the container entry, what you get is an architecture that looks like this:


The hypervisor brokers physical resources to the guest OS, but within the guest OS, the container engine allocates the guest OS resources to apps.  The developer targets the container construct as their platform, and you get something similar to, but a step beyond, a JVM or CLR.

There is still a fundamental platform management question here though.   We now have the potential for some great granular resource control and efficiency, and if we can eventually eliminate some of these layers a huge leap forward in both hardware utilization and developer freedom, but we really don’t have any overarching control system for all of this.  And now we’ve found the eye of the storm.

A War of Controllers
Standing between the developer and their user, given everything discussed above, remains an ocean of complexity.  There is huge promise and previously unheard-of agility to be had, but the deployment challenge is daunting.  Adding containers into the mix actually increases the complexity, because it adds another layer to manage and deploy.  One way to go is to be brilliant at devops and write lots of smart control code.  Netflix does this.  Google does this to run their services, as do Microsoft and Facebook.  Outside of the PaaS offerings, though, you only realize the benefit as a side effect when you consume infrastructure services.  That's changing, however.  There is pressure from the outside coming from some large open source initiatives, and this is causing an increased level of sharing and, quite likely, ultimately some level of convergence.  For now, the initiatives can be roughly divided into top down and bottom up.

The View from the Top
Top down, we're seeing the continuing evolution of cloud scale technologies focused specifically on code or data.  Google's MapReduce, which inspired Hadoop, is a great early example of this.  Hadoop creates and manages clusters on top of Linux for the express purpose of running analytics code against datasets whose analytics challenge fits the prescribed map/reduce approach (out of scope here, but great background reading).  Other data centric frameworks are building on this.  Of particular note is Spark, which expands the focused mission of Hadoop into a much broader, and potentially more powerful, general data clustering engine that can scale out clusters to process data for an extensible range of use cases (machine intelligence, streaming, etc.).
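The map/reduce pattern itself fits in a few lines. Here it is in miniature (single-process Python; Hadoop's contribution is distributing exactly this shape across a cluster):

```python
# Map/reduce in miniature: a mapper emits (key, value) pairs, a shuffle
# groups them by key, and a reducer folds each group into a result.

from collections import defaultdict

def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield word, 1                     # emit (key, value)

def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)            # group by key
    return grouped

def reduce_phase(grouped):
    return {key: sum(values) for key, values in grouped.items()}

counts = reduce_phase(shuffle(map_phase(["the cloud", "the future"])))
# {'the': 2, 'cloud': 1, 'future': 1}
```

Because the map and reduce steps are independent per key, the framework can run them on hundreds of nodes and merge the results, which is the whole trick.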

On the code side, the challenge of placing containers has triggered lots of work.  Google's Kubernetes is a project which aims to manage the placement of containers not only into instances, but into clusters of instances.  Similarly, Docker itself is expanding its native capabilities beyond a single node with Swarm, which seeks to expand the single node centric Docker API into a transparently multi-node API.
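The placement problem these projects tackle is, at its core, bin packing. Here is a hedged first-fit sketch; real schedulers weigh far more dimensions than memory, so treat the numbers and names as illustrative assumptions:

```python
# First-fit container placement: fit containers (by memory demand) into a
# cluster of identical instances, spilling to the next instance as needed.

def place_containers(containers, instance_mem_mb, instance_count):
    """containers: {name: mem_mb}. Returns {name: instance_index}."""
    free = [instance_mem_mb] * instance_count
    placement = {}
    # Place the biggest containers first to reduce fragmentation.
    for name, mem in sorted(containers.items(), key=lambda kv: -kv[1]):
        for idx, avail in enumerate(free):
            if avail >= mem:
                free[idx] -= mem
                placement[name] = idx
                break
        else:
            raise RuntimeError(f"no instance can fit {name} ({mem} MB)")
    return placement

plan = place_containers({"api": 512, "cache": 1024, "worker": 768},
                        instance_mem_mb=1536, instance_count=2)
```

Swap "instance" for "cluster of instances" and add constraints like affinity and network locality, and you have the problem Kubernetes and Swarm are racing to solve well.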

…Looking Up
Bottom up, we find initiatives to drag the base OS itself, kicking and screaming, into the cloud era.  Or, in the case of CoreOS, to replace it entirely.  Forked from Chrome OS, CoreOS asks the question: "is traditional Linux still applicable as the atomic unit at cloud scale?"  I believe the answer is no, even if I'm not committing to betting that the answer is CoreOS.  In order to be a "Datacenter OS", capabilities need to be there that go beyond Beowulf.  I'm not sure it's there yet, but CoreOS does provide Fleet, a native capability for pushing components out to cluster nodes.

Taking a less scorched earth approach, Apache Mesos aims more at being the modern expression of Beowulf.  More an extensible framework built on top of a set of base clustering capabilities for Linux, Mesos is extremely powerful at orchestrating infrastructure when the entire "Mesosphere" is considered.  For example, Chronos is "cron for Mesos" and provides cluster wide job scheduling.  Marathon takes this a step further and provides cluster wide service init.  Incidentally, Twitter achieves their scale through Mesos, lest anyone think this is all smoke and mirrors!  And of course the logical question here might be "does Mesos run on CoreOS?"  The answer is YES, just to keep things confusing.

What you Don’t See!
As mentioned above, Google, Microsoft, Amazon and Facebook all have “secret”, or even “not so secret” (Facebook published their orchestration as OpenCompute) sauce to accomplish all, or some, of the above.  Make no mistake… This space is the future of computing and there is a talent land grab happening.

And the Winner Is!
Um… sure!  Honestly, this space is still hashing out.  There is a lot of overlap and I do feel there will need to be a lot of consolidation.  And ultimately, the promise of cloud is really just bringing all of this back to PaaS, but without the original PaaS limitations.  If I’m a developer, or a data scientist, I want to write effective code and push it out to a platform that keeps it running well, at scale, without me knowing (or caring) about the details.  I’m buying an SLA, not a container distribution system, as interesting as the plumbing may be!

Chat  —  Posted: May 25, 2015 in Computers and Internet

UPDATE: 6/12 – Huzzah!  The latest build of Windows 10 beta (fbl_impressive) fixes the issue!  Relief is on the horizon!

There's been lots written about the relative merits of the Metro UI being applied to the traditional desktop OS.  This entry isn't about coming late to that party.  If Windows 10 and Server '12 tell us anything, it's that the design aesthetic, at the very least, is here to stay for a bit and will continue to evolve.  Unfortunately, along with the stylistic changes introduced with Metro came a particularly annoying bug which impacts an admittedly niche (but popular) corner case.  If you're a PC gamer who games on a laptop and runs a high resolution desktop (think QHD), then you know exactly where this is headed.  When the desktop is running native resolution (basically all the time), games will not scale to full screen unless they are also running native resolution (and unless you have a Titan X equipped laptop, this is "never").  Here is the effect:


To experience this phenomenon, a few conditions have to be met:

  • Windows 8+
  • Touchscreen panel
  • Running native res
  • Trying to run lower than native res under DirectX

I will also throw in these two, but with a clarifier:

  • Running IntelHD (although this is basically all laptops with few exceptions)
  • Running NVIDIA Optimus (the problem does also occur in pure Intel setups.  Nearly all NVIDIA laptops are Optimus, so it’s not clear to me if a pure NVIDIA setup would be immune)

Potential exceptions might be rigs running the full desktop NVIDIA parts as the only GPU, or AMD parts leveraging the APU and/or mobile ATI parts.  What happens in a nutshell is this:

  • The Intel drivers do not allow you to independently set scaling behavior by resolution (so you can’t go into the control panel and say “for 720p, always scale”)
  • There is no “default scaling behavior” that you can set – if the desktop is at native resolution, “maintain aspect ratio” is hard set since it is the only setting that makes sense for native res
  • NVIDIA cedes control of scaling to Intel under Optimus since, I believe, the NVIDIA part has no physical path to the panel (it passes through the Intel and relies on it for panel setup)

This problem has been lingering for years.  A quick web search for "cannot run full-screen non-native res" will show posts as old as 2012.  The workaround thus far has been one of two things:

  • Run the desktop at something below native res (this sucks)
  • Set the scaling option to “scale up” under the Intel drivers

I recently switched to Windows 10 CTP and discovered that this problem persists.  The workaround, however, stayed the same.  Until 5/15.  The latest updates to Windows 10, the Intel HD Windows 10 driver, and the NVIDIA Windows 10 driver introduced a new dimension to the problem: with the absolute latest Windows 10 and driver bits for Intel/NVIDIA, the above workaround no longer works.  So are we all doomed to a life of postage stamp gaming?  No!  There is an actual workaround (well, actually, I'll discuss two).

First, the easy button.  Disable touch features in Windows.  For whatever reason, the touch panel HID driver is the actual root cause of this issue.  It can be disabled in Device Manager:


The other workaround is functional, but can be a bit onerous for anyone who has a large catalog of games.  The solution here is to use the Intel Profile Manager capability to trigger a res switch when a game is run.  That can be found in the Intel HD control panel under "Profiles":


To set this up, do the following:

  • First, set your desktop resolution to the resolution you run full screen gaming under.  For example, if you want the game to run 1080p, change your current default res to 1080p and set scaling to "scale fullscreen".
  • Go into profiles (note “current settings” is what will be applied to the profile, this is why we set the resolution above) and set “trigger” to “application”
  • The “display” checkbox will be deselected, but you can reselect it
  • Browse to and select the appropriate EXE
  • Save the profile as something meaningful (e.g., Crysis)
  • Rinse/repeat for all games

If you only have a few favorite titles that need to run at lower than native res when fullscreen, this works fine.  If you have a big catalog, though, just disable the touchscreen.  Here's hoping that this gets fixed before we move to "Windows as a service"!

Gettin’ FREAKy

Posted: March 7, 2015 in Computers and Internet

Don't let the ridiculous, "stretches the definition to the breaking point" acronym fool you (that's just marketing, after all): FREAK is serious business.  For those not yet aware, FREAK is an exploit designed to take advantage of a critical vulnerability in SSL/TLS.  For anyone who just said "uh oh", that's the spirit!  Better grab a coffee.  The "Factoring Attack on RSA Export Keys" (I know, I know... this doesn't remotely spell "freak".  I told you the acronym was agonizing) is complex to implement and requires a fairly sophisticated attack structure, but incredibly organized and sophisticated attackers are hardly in short supply in today's threat environment.  Before getting into how the exploit works, some background is in order.

First is, of course, the "what" and "how" of SSL/TLS.  Secure Sockets Layer and Transport Layer Security are, in a nutshell, standard mechanisms for creating encrypted network connections between a client and a server.  SSL and TLS take care of the encryption piece: agreeing on a method of how to encrypt the data (the cipher), generating and exchanging keys, and then performing the actual encryption/decryption.  Network transport depends on an encryption aware protocol like HTTPS or FTPS.  Here is a nice detailed flow diagram that illustrates the conversation between the client and server (courtesy of IdenTrustSSL):

If you take a close look at the above flow, you'll notice that there are really two encryption stages.  Steps 1-3 are a standard PKI (Public Key Infrastructure) negotiation, whereby a server is configured with a certificate (identifying it and providing authenticity assurance) and a public/private key pair.  When a client comes and says "hello!" (plus a random number... more on that later), the server sends over its certificate and public key (and another random number... more later).  The client then decides to trust the certificate (or not, breaking the connection), and then sends over a new secret computed using the two random numbers we covered above, encrypted with the server's public key.

The server decrypts this with its private key, takes the secret generated by the client and, combining it again with the random numbers, generates a new key which will now be used to secure the channel for the duration of the connection.
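The derivation just described can be sketched in simplified form. This is an illustration of the flow only; real TLS uses a standardized PRF rather than a bare hash, so treat the derivation function as an assumption for teaching purposes:

```python
# Drastically simplified session-key derivation: both sides combine the
# two hello randoms with the client-generated secret to reach the same key.

import hashlib
import secrets

client_random = secrets.token_bytes(32)   # sent in the client hello
server_random = secrets.token_bytes(32)   # sent back with the certificate
premaster = secrets.token_bytes(48)       # client-generated, sent to the
                                          # server encrypted under its public key

def derive_session_key(premaster, c_rand, s_rand):
    return hashlib.sha256(premaster + c_rand + s_rand).digest()

# Client and server each run the same derivation and get the same key,
# without the key itself ever crossing the wire.
client_key = derive_session_key(premaster, client_random, server_random)
server_key = derive_session_key(premaster, client_random, server_random)
```

The security of everything that follows rests on the premaster secret staying secret, which is exactly what FREAK undermines by weakening the asymmetric key protecting it.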

Astute readers will notice that this means SSL and TLS are actually multi-layer encryption models, utilizing both asymmetric encryption (separate public and private keys) for quick and easy setup (nothing needs to be shared between a client and a server up front), and symmetric encryption (a single key for encryption and decryption that both sides know: a much faster method, but one which requires pre-sharing).  It is the best of both worlds: the channel setup efficiency and low level of required preconfiguration characteristic of asymmetric encryption, plus the speed and added strength of symmetric.

In PKI methodology, the algorithm which generates keys should not allow factoring the private key from the public.  To achieve a reasonable level of security, dual key systems require a very high key strength, generally 1024 bits or greater.  Symmetric key schemes can get away with much lower strength; 128 or 256 bits is reasonable.  What does all of this mean?  Well, let's take one more step back and quickly review what encryption really is (diagram lifted from PGP docs):

The above illustrates symmetric encryption, but the principle is always the same.  There is a message that two parties want to share.  They want it to be a secret, so anyone who might intercept it or otherwise eavesdrop won't understand.  Since time immemorial, messages have been kept secret using codes.  Encryption is a code.  The message is put through some fancy math, using a big complex number as a constant (the key), and a scrambled message is created.  To descramble it, you need the message, the method (the cipher) and the decryption key.  So when we say that symmetric encryption relies on a 128 or 256 bit key, that's the size of the numerical constant being used as the key (a big number).  There is a lot (a LOT) more complexity here, but this is enough for the context of this entry.
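A toy example makes this concrete. XOR is far weaker than a real cipher, but the shape is the same: message plus cipher plus shared key in, scrambled bytes out, and the same key descrambles:

```python
# Toy symmetric cipher: XOR each message byte against a repeating key.
# Applying the same key twice restores the original message.

from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"\x13\x37\xbe\xef"                    # the shared secret constant
scrambled = xor_cipher(b"attack at dawn", key)
restored = xor_cipher(scrambled, key)        # same key decrypts
```

Real symmetric ciphers like AES do vastly fancier math per block, but the operational picture (one pre-shared key, fast in both directions) is exactly this.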

Now obviously, there are many, many methods for actually encrypting data (the fancy math algorithm referenced above), and there are varying key strengths that can “work”.  Typically it’s all a trade off between performance (compute overhead), which means cost, and security.

If the message does get seized, however that was accomplished, the data thief has a bunch of scrambled nonsense.  But as with any code, it is possible to “brute force” decrypt the message.  Basically try every possible value as a key.  The catch is, with a large enough key, there just isn’t enough computing power available to try all of the combinations in a reasonable timeframe.  At least there hasn’t been until now.

Enter... the cloud.  I am a true believer when it comes to cloud.  That said, I recognize that any great good can also be twisted to serve evil.  In the case of cloud, nearly infinite compute capacity can be purchased on demand and paid for as an hourly commodity.  It's absolutely standard today to model any computing task as a cost per hour directly mapped to a cloud service provider, and the resources can be provisioned programmatically.  What this means is that brute force operations that would have taken a desktop PC 100 years can now be carried out across 10,000 PCs in a matter of days if you're willing to spend the money.  Still expensive, still not worth it.  At least at 100 years.  Of course, with the proliferation of botnets (or the "dark cloud" as I like to call it), there may be no cost at all.  But let's leave that aside for now.  What happens if the encryption is weak?

Enter... the export rules.  Way back when, the U.S. government made it illegal to export strong encryption.  Full stop.  The US was, of course, also pretty much defining the technology the world was adopting.  So what was considered "too strong"?  Anything over a 56-bit symmetric key system or a 512-bit asymmetric one.  Egads!  Over time this was strengthened (since it was ridiculous), and of course an admin could always simply force the strongest encryption (though that would mean geographically load balancing traffic to keep non-US clients outside of the US).

With this background in mind, what FREAK does is take advantage of a vulnerability in SSL/TLS (both client and server) which allows a bad packet to be injected into the up front client/server “hello!” exchange, selecting the weakest level of encryption.  What this means is that, in the case of any servers which still have support enabled for the earliest “exportable” key strength (which turns out to be a LOT of servers), the key strength drops to 512bit.

Now combine this with cloud capacity (dark or legit) and you have about an 8 hour, $100, computing challenge to brute force a private key from a server since you now have a nice packet capture at 512 bit strength to take offline.
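The factoring step itself is conceptually simple; what the weak key changes is feasibility. With a toy modulus, trial division recovers the "private" factors instantly (for real 512-bit keys the same job takes specialized algorithms and rented compute, which is the point of the EC2 math above):

```python
# Why export-grade keys fall: the RSA public modulus n = p * q, and
# recovering p and q is enough to reconstruct the private key.

def factor(n):
    """Brute-force factoring of an odd modulus -- feasible when the key
    is weak enough."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    raise ValueError("no odd factor found")

# Tiny stand-in for a weak export-grade modulus (1999 and 2003 are prime).
p, q = factor(1999 * 2003)
```

Scale the modulus up to 512 bits and swap trial division for the number field sieve, and this loop becomes the roughly $100, sub-day cloud job the attack relies on.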

Wait!  Take it offline?  Why would that work?  Well, here is the thing.  The asymmetric key pair hangs around a long time.  Sometimes, like... forever!  Literally.  Many servers only regenerate keys on reboot, and thanks to the miracles of high availability and the "pets" architecture approach (versus "cattle" in cloud design), that web front end may be online for years before rebooting.

So as you probably gathered, this requires a man in the middle:

This can be as easy as a bad actor on public wifi, or as complex as a compromised ISP router in the path of a high value server.  All very feasible to accomplish for a well funded black hat org.

Consider a real world scenario to imagine the possibilities...  High value targets like Amex, Citibank and the FBI are vulnerable and have been for ten years (meaning export grade encryption is enabled and selectable).  On the client side, nearly every platform is vulnerable!  The last piece is that what used to be the hard part, compromising the network path, has become easy thanks to public wifi ubiquity.  So let's combine and imagine...

1) a popular public hotspot with weak security (WEP) and the key published on the register (or even not)

2) you hang out all day and capture wifi traffic

3) you either actually have the key, or you easily break it offline

4) with a nice open, shared media network breached, you hang around and look for connections to the high value targets

5) you compromise the channel when you find one and start capturing weak encrypted traffic

6) traffic flow in hand, you brute force factor the key pair using about $100 of EC2 time

7) you are now free to watch and manipulate all traffic to the site until they change the key.  Which may never happen.

So what are the implications if the exploit is pulled off? Well… The attacker has the private key.  This means two big scary things:

1) until the keys are regenerated, they can instantly decrypt any traffic they can capture.  Suddenly the expense of compromising an ISP path router just got a lot more realistic!

2) they can inject anything they want into an intercepted conversation

So do we turn off the interwebs??? Yes! Well no.  But this is a big one and a ton of high profile sites are impacted.  In my opinion a few things must happen:

1) weak encryption needs to be retired, even as a configurable option, and the export rules need to go.  Strong encryption everywhere.  Let the NSA build bigger brute force machines

2) servers need to be updated ASAP.  Force strong encryption, disable export grade, and patch

3) keys need to be regenerated multiple times per day.  Yes, this is computationally expensive.  There are better ways to do this than "bigger web server", though.  Rearchitect.  Design for failure.  HSM.  The truth is out there.

4) clients need to be patched as soon as patches are ready.  Linux, OSX, Windows, IE, Chrome, Firefox, IOS, Android.  Yikes!

Can anything be done in the meantime?  Mainly, be careful with public wifi (this is just a rule, really).  Stick with authenticated public wifi using stronger encryption, or use a VPN.  A VPN is just a point-to-point secure channel bridging networks, so it isn't a panacea here (after all, you can't VPN directly to a public server), but it can help mitigate some risk exposure until the ecosystem is corrected.

Fun times!

Even as enterprises just start to wrap their minds around how cloud in general will transform the way they operate, the goal post is already moving forward.  If anyone out there has been looking for a final proof point that the sphere of control has officially passed to the developer, this recent shift is all you need.  What am I on about this time, and what the heck does that title mean?  A little history is probably in order.

In the beginning, there were Mainframes, and they were good.  Developers wrote code and dropped it into a giant job engine which then charged them for the time it consumed.  Paying too much?  Well time to optimize your code or rethink your value proposition.  This worked for quite a while, but as technology evolved it inevitably commoditized and miniaturized and as a result became far more available.  Why wait in a queue for expensive processing time, purchased from a monopoly, when you could put it on your desk?  The mini-computer revolution was here, quickly giving way to the microcomputer revolution, and it was also all good.

Computers stranded on desks aren’t particularly useful though, so technology provided an answer in the form of local area networks, which quickly evolved into wide area networks, which ultimately enabled the evolution of what we today call the Internet.  All of these things were also good.  As technology continued to commoditize, it became a commonplace consumer product like a car or a toaster.  Emerging generations were growing up as reflexive users of technology, and their expectations were increasingly complex.

To keep up, companies found they had to move fast.  Faster than IT departments were able to.  Keeping track of thousands of computers, and operating the big expensive datacenter facilities they lived in, was certainly an “easier said than done” proposition.  By the mid 2000s, rapidly evolving agility in software came to the rescue of what was, in essence, a hardware management problem.  Virtualization redefined what “computer” really means and operating systems became applications that could be deployed, removed and moved around far more easily than a physical box.  This was also good, but in reality only bought IT departments a few years.  The promise of virtualization was never fully exploited by most since the toughest challenge is almost always refining old processes and, at the end of the day, there were still physical computers somewhere underneath all of that complex software.

In the last years of the last decade, a new concept called “cloud” grew out of multiple converging technologies and was the catalyst that literally blew the lid off of the IT pressure cooker.  If you think about how technology is consumed and used in any business, you have folks who look after, translate and then solve business problems (business analysts, developers, and specialists) and then you have the folks who provide them with generic technical services to get their work done (security folks, operations and engineering folks and support professionals).  By the time “cloud” arrived in a meaningful way, the gap between technology folks in the lines of business, and the technology folks in core IT, had grown to dangerous proportions.  In short, the business lines were ready for new alternatives.  From cloud providers they found the ability to buy resources in abstract chunks and focus primarily on building and running their applications.

This trend has transformed IT and we are in the midst of its impact.  The thing is, though, that technology adoption cycles at the infrastructure layer (once reduced to glacial pace by the limits of core IT adoption abilities) will now rapidly accelerate.  Developers are expecting them to since the promise of cloud is to bring them all of the efficiencies of  emerging technology with none of the complexity.

This is why, barely 2 years into the shift to cloud design patterns and cloud service consumption and operation models, we are already seeing a shift to containers (and also “SDDC”, but that’s a topic for another day).  What do these technologies mean to the new wave of IT folks though?  Well first let’s take a look at what we’re actually dealing with.  I will focus on two rival approaches.  The first is Amazon’s “Elastic BeanStalk”, which is how AWS answers the “Platform as a Service” question; the second is the traditional “Platform as a Service” approach and, more recently, the “Containers as a Service” (for lack of a better term) approach being provided by Google and Microsoft.  To kick things off, a quick diagram:

So what the heck is this about?  A few quick definitions:

  • Code – as implied, this represents a developer with a set of Ruby, Python, Java, or .NET code ready to be deployed onto some mix of compute, networking and storage
  • API – in the world of cloud, where developers rule, the API reigns supreme.  At code complete, developers will look for an API to interact with in order to deploy it
  • Orchestrator – if YOU don’t have to care about pesky things like servers and disks, SOMEONE must right?  Well that someone in this case is a smart piece of code called an orchestrator that will make some decisions about how to get your code running
  • Fabric Controller – the true secret sauce of cloud.  The brilliant code which allows mega providers to out-operate IT departments 1000 fold.  Think of the fabric controller as the fully realized Utopian dream state of virtualization, and “software defined everything”.  The fabric controller is able to manage a fleet of servers and disks and parcel their capacity out to customers in a secure, efficient, and highly available way that still turns a profit.
  • Instance/VM – Amazon calls them instances, everyone else calls them Virtual Machines.  It’s a regular operating system like Windows or Linux running on top of a hypervisor which in turn runs on top of a physical server (host) – OS as code.  The fabric controller monitors, provisions, deprovisions and configures both the physical servers and the virtual servers that run on top of them.
  • Container – the guest of honor here today.  The ultimate evolution of technology that started with the old concept of “resource sharing and partitioning” back in the Sun Solaris days and continued with “application virtualization” like SoftGrid (today Microsoft App-V).  The same way a hypervisor can isolate multiple versions of an operating system running together on one physical machine, a container controller can isolate multiple applications running together in one operating system.  Put the two together and you have the potential for really good resource utilization.

With the definitions out of the way, let’s take a look at how AWS does things with BeanStalk.  Very simply, BeanStalk ingests the code that you upload and takes a look at the parameters (provided as metadata) that you have submitted with it.  The magic happens in that metadata since it is what defines the rules of the road in terms of how you expect your application to operate.  Lots of RAM, lots of CPU, not much RAM, more CPU than RAM… this is the sort of thing we’re talking about.  The BeanStalk orchestrator then starts making provisioning requests to the fabric controller, which provisions EC2 resources (instances) accordingly and configures cool plumbing like autoscaling and elastic load balancers to allow the application to gracefully scale up and down.  Assuming your code works and you defined your parameters well, you (as the developer) are in production and paying for resources consumed (hold onto that thought) immediately, without having to care about much else.
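As a toy illustration of what an orchestrator does with that metadata (the catalog, the metadata keys, and the selection logic here are all invented for illustration; this is not the real BeanStalk API):

```python
# Toy orchestrator: pick the smallest instance type that satisfies the
# resource metadata submitted alongside the code.  Catalog specs and
# metadata keys are illustrative only.
CATALOG = [  # (name, vCPUs, RAM in GB), smallest first
    ("t2.micro",  1,  1),
    ("t2.medium", 2,  4),
    ("m3.xlarge", 4, 15),
]

def pick_instance(metadata):
    for name, vcpus, ram_gb in CATALOG:
        if vcpus >= metadata["cpu"] and ram_gb >= metadata["ram_gb"]:
            return name
    raise ValueError("no instance type large enough")

print(pick_instance({"cpu": 2, "ram_gb": 3}))  # t2.medium
```

The real system layers autoscaling rules and load balancer wiring on top of this basic "metadata in, provisioning decision out" loop.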

OK, that makes sense.  It’s basically “autoprovision my infrastructure so I don’t have to think about it”.  The developer dream of killing off their core IT counterparts.  Microsoft explored the same concepts ages ago with the Dynamic System Initiative and the Software Definition Model and ultimately (sort of) evolved them into Azure.  So how is PaaS different? And for that matter what the heck is “Containerized Infrastructure”?

Platform as a Service (PaaS) can be thought of as the final endgame.  Ironically, we got there first.  Google got the ball rolling with AppEngine back in 2008. Microsoft actually led with PaaS in cloud a couple of years later in 2010, but for various reasons (a not so hot initial implementation, a customer segment that wasn’t ready, a hard tie-in at the time to .NET) they had to quickly backpedal a bit in order to get traction.  In the meantime Amazon was piling on market share and mindshare year over year with pure Infrastructure as a Service (basically “pay as you go” virtual machines) and Storage as a Service plays.

What PaaS provides, ultimately, is something akin to the original Mainframe model.  You push code into the platform, and it runs.  It reports back on how many resources you’re consuming and charges you for them.  Ideally it is the ultimate layer of abstraction, where cumbersome constructs like virtual machine boundaries, or where things are actually running, are fully abstracted away.  No PaaS really works quite that way though.  What they really do is utilize a combination of containers and virtual machines.  This brings us to today where “container solutions” like Docker are gaining lots of traction on premises and Google and Microsoft are both educating developers on being container aware in the cloud.  Google has added the Docker compatible, Kubernetes based, “Container Engine” to their original “Compute Engine” IaaS offering and Microsoft has expanded their support for “Windows Server Containers” to include interoperability with Docker as well.

In the container model, a developer still submits code and metadata through an API, but what happens next diverges from what is happening in BeanStalk.  The orchestrator has more options.  In addition to asking for virtual machines from a fabric controller, it can also create a container on an existing virtual machine that has available resources.  The same way virtual machines maximize resource utilization of a host, containers maximize resource utilization of each virtual machine.
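That divergence can be sketched as a first-fit placement loop: reuse spare capacity on an existing VM, and only ask the fabric controller for a fresh VM when nothing fits (sizes here are arbitrary, and real orchestrators weigh CPU, affinity, and much more):

```python
# Minimal sketch of container placement.  Only RAM is modeled; every
# number is an arbitrary illustrative value.
class VM:
    def __init__(self, ram_gb):
        self.free = ram_gb
        self.containers = []

    def place(self, name, ram_gb):
        self.free -= ram_gb
        self.containers.append(name)

def schedule(vms, name, ram_gb, vm_size=8):
    for vm in vms:                      # first fit on existing VMs
        if vm.free >= ram_gb:
            vm.place(name, ram_gb)
            return vms
    new_vm = VM(vm_size)                # else provision a fresh VM
    new_vm.place(name, ram_gb)
    vms.append(new_vm)
    return vms

vms = []
for name, ram in [("web", 2), ("api", 4), ("db", 6)]:
    schedule(vms, name, ram)
print(len(vms))  # 2 -- "db" (6 GB) won't fit beside web+api
```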

Now if you’re thinking to yourself “why should I care?” then congratulations!  You get the cloud gold star of the day! I mean if we think about it, the point of cloud is that I really don’t care about what’s going on with infrastructure, so why should it matter to me how the provider is serving up the resources?  Well there are two primary reasons:

  • Economics – the whole point of this is doing more, and doing it more quickly, while spending less money.  This is why cloud is unstoppable and CFOs love it.  Despite protests to the contrary, it is proving cheaper than the legacy IT approach (no one is shocked by this except those with a legacy IT bias who have never deeply studied enterprise TCO).  With BeanStalk, you have a fairly resource heavy approach.  The atomic unit for scaling your app is a virtual machine.  As your app grows, it needs to grow in instance based chunks and you will pay for that in instance hour charges.  In theory a containerized back-end is more resource effective and should be more cost efficient.  In reality this will vary widely by use case which brings us to the second point…
  • Application Architecture – cloud design patterns are a fascinating turn around for development.  Enterprise developers spent years, and technology providers built a plethora of technology and process, in order to create invulnerable platforms for code.  Fault tolerance and high availability are industries because of this.  Cloud basically throws all of that away.  The mantra in cloud is “infrastructure is a disposable commodity” (this is the brilliant “Pets vs Cattle” analogy popularized by Gavin McCance at CERN back in ’12).  The idea in cloud design is to design for fail.  You build resilience and statelessness into the application architecture and rely on smart orchestration to provide a consistent foundation even as individual components come and go.  Containers are a natural extension of this concept, extending control down into each virtual machine.  Container architectures can allow something like this:


Obviously a hypothetical example, but a hint of what ultimately could be possible.  If you think along the lines of app components being atomic units, rather than an OS, you can start to think about which components might benefit from co-location on a single machine.  Intra-OS isolation can potentially allow scenarios that traditionally might have been impractical.  So as you map out an architecture plan, you can begin to group application components by where they fit in the overall solution and allow the container orchestrator to group them accordingly.  In the example above we have front-end processing co-mingling with the web tier while app logic code co-mingles with data.  Again, this isn’t the best example, but it works for illustration.  Personally I think we are at the dawn of what can be accomplished with this next move forward.  Now if enterprises can just catch up with the last one.  They’d better hurry because, before you know it, containers running directly on the hypervisor will be here to really mix things up!
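The economics point above can be made concrete with back-of-envelope arithmetic; every price and packing ratio below is invented purely for illustration:

```python
# Compare instance-based scaling (one VM per app component) with a
# containerized back end that bin-packs components onto shared VMs.
# Prices are in cents per VM-hour and entirely made up.
INSTANCE_HOURLY_CENTS = 10
CONTAINERS_PER_VM = 6      # components that comfortably share one VM

def instance_scaling_cost(components, hours):
    # the atomic scaling unit is a whole instance
    return components * INSTANCE_HOURLY_CENTS * hours

def container_scaling_cost(components, hours):
    vms_needed = -(-components // CONTAINERS_PER_VM)  # ceiling division
    return vms_needed * INSTANCE_HOURLY_CENTS * hours

print(instance_scaling_cost(12, 24))   # 2880 cents for a day
print(container_scaling_cost(12, 24))  # 480 cents for the same day
```

In reality the packing density varies wildly by workload, which is exactly the per-use-case caveat made above.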

Disasters are an unavoidable reality in life.  Bad things eventually will happen.  Unfortunately, it is human nature to defer thinking about the worst case.  When disaster strikes, we are often unprepared.  This is true in life, and it is perhaps doubly true in IT.  Now there is a good argument to be made here in favor of disaster avoidance given this fundamental quirk in human nature, and “design for failure” is certainly a core tenet of next generation cloud design, but for today we’re going to keep the focus on how things are today.  The bulk of the applications that keep enterprises up and running today are not inherently resilient.  They are often monolithic, or have complex interdependencies on other systems and on persistent state.  The best way to keep these applications alive is to build out the most highly available, and fault tolerant, infrastructure foundation possible below them.  But once again, eventually even the strongest house can collapse.  When the worst happens, are you prepared?

For most IT shops, the honest answer to this question is “probably not”.  Or maybe the slightly better “it depends”.  The fact is, IT budgets are tight in terms of both time and capital, and a thorough disaster recovery plan is expensive.  And even where a plan is in place, it isn’t much good unless it is regularly tested to ensure it works.  Unfortunately if it doesn’t work, this can lead to more cost and downtime which are two things IT must avoid like the plague.  As a result, it isn’t a surprise that disaster runbooks often collect dust on a shelf, and cold sites stay cold, until lightning strikes and, once that happens, it is too late.  Over the generations this sordid mess has left many IT pros wondering “why can’t there be an easy button?!”  Well the good news is we are entering an era when “a DR easy button” is no longer a flight of fancy.

So what is the magic that has brought us to this point?  How is it possible that such a difficult challenge can suddenly have become easier?  It probably comes as no surprise that the key lies in the convergence of virtualization and the cloud.  Software Defined Everything means that all infrastructure can now be defined by code and code, by its nature, is flexible.  On the other side of the equation, Infrastructure as a Service means that a secure, highly scalable, and physically remote site, billed by the hour and programmatically provisioned, is standing by ready for duty.  Bridging these two worlds, however, and developing a true disaster recovery architecture and process, is still a challenge.  Most IT shops don’t have the level of on premise maturity, or devops core competency, to build something like this out.  That’s where VMware vCloud Air comes into play.

From day 1, VMware’s pitch has been that their differentiated value lies in enabling a “true hybrid cloud” whereby on premise technology seamlessly extends into a public cloud service.  It stands to reason, then, that a disaster recovery service for VMware environments would be a no brainer and, sure enough, one of the first value added services released by VMware for vCloud Air is the vCloud Air Disaster Recovery as a Service offering.  I recently had the chance to give the service a whirl and thought it would be interesting to document the process and provide some insight on how it went.

The first step of course is to subscribe to the service.  I won’t go into detail on that process here, but the first step would be to contact one of the great VMware resellers, or contact VMware directly, and ask for info on the Disaster Recovery Service.  More info on the service itself, and getting started, can be found on its landing page.  Once you’ve subscribed, you’ll get a welcome aboard email with your credentials, and can sign in and start using the service.  In addition, you will be provided a link to the vSphere Replication Appliance, to deploy on premise, which is the core engine of the service offering:


Once up and running, logging into the core vCloud Air service is easy.  Just head over to the service URL from any web browser.  The initial login prompt requires the credentials provided in the welcome aboard email:


The initial landing page provides a great, and simple, overview of all cloud resources organized by “Virtual Datacenter”.  In my case I have access to multiple environments including a “Dedicated Cloud” which has been partitioned into multiple vDCs.  Disaster Recovery capacity is denoted by the blue lightning bolt cloud:


To get started, we need to configure our on-premises vCenter installation to prepare it for integrating with the service, but as a pre-requisite we should collect some info that we will need later.  We are after two things, our Cloud Provider Address, and our Organization Name.  These two items can be found by clicking into the Disaster Recovery virtual datacenter, and then clicking on the “Cloud Provider Address” link on the right:


The next step is to get a hold of the appliance itself.  It can be found inside of My VMware.  After logging in, click on “My Downloads” and then “All Downloads”.  Next search for “vSphere Replication Appliance”.  The current version as of this entry is 5.8 and the link to the OVA should be at the top.  If you select the right entry, you will see this download page from which you can download either an ISO or ZIP container:


Once downloaded, either mount the ISO or extract the ZIP to a local folder, and launch the vSphere Web Client from your vCenter server.  The replication appliance is a standard OVF install and detailed instructions step by step are provided by VMware here.  Once the OVF has been deployed and configured, log out of the Web Client, close the browser tab, and launch a new session.  If the installation went well, a new add-on icon for vSphere Replication will be found on the Home tab:


Click into the vSphere Replication add-on and select the Home tab.  Highlight the vCenter entry and click the “Manage” icon:


From the Management interface, select the vSphere Replication tab.  This is where the foundation setup is configured for replication:


Select Target Sites to start the process of adding the vCloud Air service as a replication target:


To kick off the workflow, click the “Add a Provider Target” icon.  This is the icon to the immediate left of the refresh arrow.  The “Connect to a Cloud Provider” wizard will start:


At this point we need the two critical pieces of information we gathered earlier.  For “Cloud Provider Address”, enter the provider address we retrieved from the vCloud Air service.  For “Organization Name”, enter the Organization Name retrieved in that same step.  For User Name and Password enter the vCloud Air credentials and click next:


Connecting to the vCloud Air API endpoint will trigger a certificate warning.  It can either be accepted here, or the certificate can be downloaded and installed on the vCenter server proactively to prevent the warning.  Either approach will work:


Next we select our Virtual Datacenter.  Any vCloud Air DRaaS subscriptions will be available for selection here.  In our case there is only the one subscription.  Select the Virtual Datacenter being setup and click next:


Before we commit the configuration, we have one final opportunity to verify the settings:


At this point we have our replication target configured, but looking closely we can see there appears to be an issue. Under status we see an icon that appears to be a broken network connection and, luckily, some explanatory text indicating that we are missing our network settings:


Very handily, this is a clickable icon.  Clicking the network settings error will invoke the “Configure Target Networks” dialogue box.  It’s worth taking a moment to explain what this setting means.  In a disaster recovery scenario in general, a source server (the production state) is paired with a target server (the disaster recovery state) and the two servers are kept synchronized via replication.  The two servers share a configuration by definition, but obviously live on two physically discrete networks.  This is true regardless of whether the servers are physical or virtual (and whether the underlying network is physical, or virtual).  The network configurations may also be duplicated (same IP space on both sides), but this isn’t mandatory.  Obviously if the network configurations differ, then the server will need reconfiguration before it can function if it has a statically assigned IP.  What we’re configuring in this step of the setup is the definitions for both the Test and Recovery networks in a DR event.  These are network definitions that have already been setup in vCloud Air.

To setup our DR networks, we need to return to the vCloud Air UI.  Clicking in to the DR Virtual Datacenter we can configure our virtual resources.  Click on the Networks tab to show the networks that have been defined for DR.  As you can see we have a few already defined in our case, but adding a new one is very easy.  Click “Add One” to get started:


To create a new network definition we just have to provide some basic parameters.  First we provide a name and description to identify the network entry.  Next we enter the IP details – gateway, subnet mask and IP range.  This configuration can match the vCenter port group at the source, or differ.  Remember if it differs the VM will need to be reconfigured after a failure event.  The service does not provide guest configuration capabilities (unlike SRM for example).  Multiple networks can be created and any network definition can be selected for use as either the “Test” or “failover” network in the vSphere Replication configuration:
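The sanity check behind that dialog can be approximated with Python’s `ipaddress` module; the function and field names below are invented for illustration and are not part of the vCloud Air API:

```python
import ipaddress

# Validate a DR network definition: the gateway and the usable IP range
# must all fall inside the subnet implied by the netmask.
def validate_network(gateway, netmask, range_start, range_end):
    net = ipaddress.ip_network(f"{gateway}/{netmask}", strict=False)
    for addr in (gateway, range_start, range_end):
        if ipaddress.ip_address(addr) not in net:
            raise ValueError(f"{addr} is outside {net}")
    return net

net = validate_network("192.168.10.1", "255.255.255.0",
                       "192.168.10.50", "192.168.10.200")
print(net)  # 192.168.10.0/24
```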


Returning to vCenter, we can now apply one of the defined network entries for each replication configuration category.  The “Test” network refers to which network the VM will be attached to during one of the formal “DR Test” events (the DRaaS defines formal test cycles that are support facilitated in keeping with good DR best practice) and the “Recovery Network” refers to the network the VM will be attached to during an actual failure event.  After the selections have been made, click next to validate the configuration:


Finally clicking Finish will commit the configuration:


Returning to the Manage tab we can now see that the network settings error has been cleared and we are all green:


Next we can check on the status of our Replication setup by clicking Replication Servers:


And that’s all there is to configure!  Before we can call this an “Easy Button” for real we need to make sure it actually works.  Heading back over to our Virtual Machine list in the Web Client, we can now select any VM and pull up the Actions menu.  Notice that we now have a new option for vSphere Replication:


Selecting this option will trigger the Replication Configuration Wizard.  First step is to choose a target.  For vCloud Air we select “Replicate to a Cloud Provider” and click Next:


Next we can select the Target Site that we created in the previous steps:


We can now select an available storage tier into which to hydrate the VM.  This is a fantastic option as it allows us some interesting flexibility.  For example, the choice can be made to run in a reduced performance mode during a disaster by selecting a lower performing, lower cost, tier of storage to fail over to.  In our case we are selecting our standard storage tier:


Next we can set a policy for quiescence.  This is another fantastic option. For background, quiescing means temporarily stopping the source OS in order to bring state in sync with the target.  This option gives us the opportunity to choose a method by which the vSphere Replication server will quiesce the source.  In our case we’ve selected a Windows based VM, so we will use Microsoft Volume Shadow Copy Service to control OS state:


The last step is to set our Recovery Point Objective.  RPO refers to the maximum amount of data loss we are willing to tolerate and directly sets the synchronization interval.  For vCloud Air, RPO can be set anywhere from 15 minutes to 24 hours, meaning the source and target will be synchronized at whatever interval we choose in that range.  In the event of a disaster the worst case is that we lose one full interval of data (the disaster occurs a moment before the next sync cycle would have triggered), so at the minimum setting that is 15 minutes of data:
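The RPO relationship can be expressed in a few lines; this is a sketch of the arithmetic, not the service’s actual API:

```python
# The service allows an RPO between 15 minutes and 24 hours; worst-case
# data loss is one full RPO interval (disaster strikes just before the
# next sync would have run).
MIN_RPO_MIN = 15
MAX_RPO_MIN = 24 * 60

def worst_case_loss_minutes(rpo_minutes):
    if not MIN_RPO_MIN <= rpo_minutes <= MAX_RPO_MIN:
        raise ValueError("RPO must be between 15 minutes and 24 hours")
    return rpo_minutes   # you can lose up to one sync interval

print(worst_case_loss_minutes(15))    # 15
print(worst_case_loss_minutes(1440))  # 1440
```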


We can now validate the settings and commit the configuration:


And with that we have pushed the “Easy Button” for DR on this VM!  Rinse and repeat to cover as many VMs as you have subscribed capacity to support.  So what are my thoughts on the service?  Well I think it’s absolutely fantastic.  It brings you very painlessly from “nothing” to “something” without any investment in additional infrastructure construction.  One thing we didn’t show here is that Offline Data Transfer can be used to seed the initial replication as well, which is very handy in events where a large number of VMs are being protected and upload bandwidth is becoming a bottleneck.

Are there any caveats?  There are, but luckily they are all covered by the roadmap so things will continue to improve.  Some noteworthy gaps today are:

  • Not completely self service disaster coverage – The Customer Success Team support folks do need to be called in a disaster.  This will be changing down the line
  • No “fail back” support – this one is tricky.  Once up and running on the DR site, there is a 30 day window (extendable by calling support) after which point the DR site needs to be returned to cold and production shifted back on-prem.  Unfortunately there is currently no tooling, either online or offline, for data transfer back.  The replication is not bi-directional, so a manual copy, using vCloud Connector, will have to be scheduled and performed to return to on-prem production.  This is the biggest show stopper, but it is definitely on the roadmap
  • No guest customization support – as indicated in the entry, today there is no way to perform complex configuration of the target automatically, post disaster, the way you can with SRM.  This is less impactful than the failback since there are lots of ways to mitigate it (keeping like for like network configuration, writing custom scripts, etc), but does appear to be in the future plans which is a good thing.

Not too bad!  Just 3 caveats for a version 1 service in a space as complex as Disaster Recovery is a great start.  I really look forward to tracking this service as it evolves and updating this entry with new info.  Stay tuned!

Battle of the Axis Powers! GTR vs RS5!

Posted: October 22, 2014 in Cars

As promised last entry, I want to provide my impressions of how the 2014 GTR Black Edition stacks up against the 2009 GTR Premium that I owned a couple of years back.  While I’m at it, though, I decided that it would also be of value to provide some insights into how it compares to the departed RS5 while that car is still very much fresh in my mind.  Rather than take my usual meandering and prosaic approach, this time I’m going to do this in a really structured way as I think it better suits the topic.  So let’s get to it!

First Impressions – The Exterior



From the outside the GTR is a pretty polarizing thing.  It’s all strange, sharp angles and slightly odd proportions.  Absolutely aggressive and purposeful looking, and unapologetically Japanese, it’s the type of design that rarely garners a neutral reaction.  At this stage though, a GTR basically looks like a GTR.  We’re into year 6 now of the same basic wedge and the non-traditional look has become an integral part of the personality of the car.  My 09 was a Super Silver premium edition which makes it as basic of a GTR as you can get.  Silver was enormously popular year one despite being a whopping $3000 option.  Why so much?  Well according to Nissan the paint process for Super Silver is different.  Unfortunately this makes it a bit tricky to match aftermarket, so repairs can be problematic.  In comparing it to the ’14 I can definitely say that the standard paint has more orange peel.  Other than that I’d probably be lying if I said I spotted any significant difference for the three grand.  Paint quality aside, I think the car looks better in black and I think the Black Edition adds touches that make it the best looking GTR to date.  This of course is subjective, but I’ll make a case for it anyhow.  Most noticeable are the special edition lightweight Rays rims:


Of course a black car with black wheels is without a doubt an acquired taste.  I believe ‘murdered out’ is the term the kids use these days.  Some might say ‘ghetto’.  Me?  Being a native of Brooklyn (real Brooklyn, not ‘hipster Brooklyn’) I love it!  One downside is that keeping these wheels looking good is almost certainly going to be tough.  In addition to the special rims, the Black Edition also brings… carbon fiber!  Why? Because racecar!  Also, because weight and because stability.  In typical Mizuno-san fashion Nissan has a 45 page white paper explaining how rendering the wing in carbon fiber makes some significant difference.  Safe to say to 99% of drivers 99% of the time the difference is appearance:


May not look like much, but as with all real CF it’s stupidly expensive

This wing is $9000 and is extremely well fabricated.  The dark grey effect of the carbon fiber also nicely complements the similarly grey hardened plastic lower bits around the sides, front and rear of the GTR.  Other than these differences, the 2014 Black Edition is physically pretty much identical to the 09 Premium.  There are tweaks here and there worth noting for enthusiasts, but anyone who doesn’t really know the car would never spot them.

In terms of fit and finish and build quality, the GTR exterior is a mixed bag.  Modern paints are a lot more environmentally friendly which is great, but the downside is that most are quite thin.  Orange peel is guaranteed these days and it’s extremely difficult to keep dark colors looking good long term.  The GTR black is a gold flake metallic which looks sharp showroom fresh, but doesn’t quite scream $110k car.  To be clear, it also doesn’t scream “Altima”.



The RS5 is an extension of the S5 design, which itself is an extension of the lovely A5.  Audi’s equivalent of the BMW 3 Series coupe (now 4 Series – BMW taking a page from Audi’s book of separating coupes and sedans into separate lines), the A5 is a two door version of the A4.  The design has been around a good while now, but it is quite timeless and I think most would agree that it is one of the nicer small luxury cars on the road.  Unlike the GTR, this is a design that can inspire neutrality and rarely inspires any purely negative feedback.  As befits an RS model, the RS5 diverges from the base car in a few subtle ways, but they add up to a very significant net impact.  Taken in isolation it might be easy to dismiss the two as similar, but seen together the difference is dramatic.  For one, the RS5 is about 6mm wider, but with sculpted sides that accentuate this flare.  It also sports a different grille with a hexagonal chain link pattern vs the horizontal slats of the A5/S5, and a more aggressive (and lower) front air dam.  In the rear, the RS5 gets a rear deck lip as well as a speed deployed rear deck lid spoiler a la the 911.  Rounding out the accents out back are a rear diffuser and a pair of massive oval exhaust tips.  Anyone who likes the A5 look (most people) will certainly love the RS enhancements.  On the downside, the RS5 is solidly “German luxury coupe” and will never be mistaken for an exotic.  There is no “woah! what the heck is THAT car!” with the Audi.  This is the downside of a design that isn’t extreme.  To get that effect from Audi you really need to step up to the $160k R8.

In terms of build quality, Audi is really knocking it out of the park these days.  A casual walk around the RS5 is confidence inspiring.  Visible seams are minimized and where there are any, the gaps are millimeter perfect.  The entire thing just looks solid and the quality of the paint finish is very high.  Better than the GTR.  Of course how these two black cars would hold up over time is a different story.  I suspect they’d be similar.  Both cars look high dollar, but the more expensive (by a lot) GTR probably just loses to the cheaper Audi in terms of fit and finish.

All aboard! – The Interior


Much like the outside of the car, inside the cabin the GTR is a study in interesting contrasts.  Nowhere near as bad as its more vocal detractors would like to claim, its issues are more related to design language than to materials quality or fit and finish.  Those areas are actually quite good on the GTR and, between the two cars, I’d say that the GTR actually uses less plastic, and the plastics that are there appear higher in quality than the Audi’s.  The layout, however, is a bit of a mess visually.  Ergonomically, though, it is fantastic!  “Function over form” is an important GTR theme.  That in itself is almost all you need to know about the car if you’re seriously considering it.  Here is how things look inside:


Let’s talk good and bad first.  The good is that there is actually a fair bit of leather going on there, including the lower dash, door pulls and shift knob, and in the 2014 it is stitched leather, which looks great.  The other good thing is, as you can see, the center console controls are intuitive and well presented to the driver.  The steering wheel is meaty and provides great grip.  The faux aluminum bits are now a darker silver rather than the light silver painted parts in the 09, which scratched when you breathed on them and immediately looked horrible as a result.  On the not so good side, the look of the center console is a disaster.  The latest models have added carbon fiber overlays that help a bit, but they aren’t high enough quality.  The vent style and placement is also a bit odd.  The seats, on the other hand, are phenomenal.  Especially the special edition Recaros in the Black Edition, which are solid feeling two toned leather and provide unbelievable support, better than the 911 Turbo sport seats.  In comparison, the 09 Premium was all of this but worse.  Nissan has done a good job, on the Black Edition in particular, of making the cabin a premium experience.  It is far better than the E92 M3 was, by comparison, and the fit and finish of every surface is good and solid.  The doors close with a nice thud and all the levers and buttons have good response.  Some exceptions are the center console knobs, which do feel a bit cheap and rubbery (ahem… parts bin), and the mode switchgear for the selectable drive dynamics options, which isn’t as “weighty” and solid feeling as you might expect.  Despite all the noises the car makes, none of them originate from “bits” rattling.  On the modern BMWs I’ve had (post E46), this is not the case and everything in the cabin starts to “loosen up” a bit after a year or two.


It is tough to beat Audi on interiors these days and the RS5 is no exception. If there is one downside to their design aesthetic, it is that it is utterly uniform.  Every Audi from the A3 up through the R8 is extremely similar in cabin.  If you hate it you’ll hate them all, but I suspect almost no one would hate it.  On the flipside though, the special models like the R8 or RS line feel slightly less special because they share so much in common with their lesser brethren.  That said there is pretty much nothing to complain about here short of nitpicking:


Where Audi has used carbon fiber (and they’ve used a lot of it on the RS5) they’ve used very high quality carbon fiber.  It looks absolutely fantastic.  The perforated flat bottom steering wheel is a nice touch and also provides a solid meaty grip.  Every button and knob has a weighty, high quality feel and the car is as vault solid inside as it appears outside.  The doors close with a bank vault thud and everything feels tight.  Over the course of a year of ownership the car did develop an odd interior rear rattle, though.  I never quite pinpointed it, but it wasn’t a trim piece.  Tricky to avoid with a stiffly sprung suspension, but the GTR did manage it.  My 09 had 24,000 rattle free miles (or maybe the car is just too loud to notice them? Whoops! More on that later).

In terms of design language, Audi excels.  The layout is fantastic and looks as good as it works with terrific ergonomics.  The controls might be slightly less intuitive though, compared to the GTR, primarily due to the iDrive-esque controller used by Audi MMI.  The seats are wonderfully supportive and extremely comfortable.  More comfortable, but less supportive than the GTR.  I had the much maligned comfort seat option which trades the usual aggressive RS sport seats for actively cooled “sporty” seats, so this makes sense.

The not so good bits are the aforementioned plastics.  There are a lot of them (the entire dash), which is not so unusual, but they’re pretty much the same plastics you’d find in an A3 (and maybe even a VW!).  This is a bit of a letdown for a car that is upwards of $80k.

The FUTURE – In Car Tech


Here is an area where the years have been unkind to the GTR in general, but more unkind to the 09.  In the ’14 at least, all of the bases are covered – XM radio, USB/iPod interface, Bluetooth with audio streaming, GPS and a nice LCD.  In the 09 the LCD was less nice, the system was a bit slower, XM radio was missing and it had neither the iPod interface nor Bluetooth audio streaming.  Tech is an unforgiving master and time marches on quickly.  The GTR has a decidedly last gen in car entertainment system, even on the ’14.  This actually isn’t necessarily bad, it’s just that you’re not going to find connected apps, a WiFi hotspot, or any of the other “car meets tablet meets internet cafe” bells and whistles.  Of course it’s also arguable how necessary or realistically practical any of that stuff is.

The in car control systems are another matter entirely.  Here the GTR brings some tools to the table that are pretty much unheard of.  The star of the show is the car info center, famously designed by the team that did Gran Turismo in an odd “meta moment”.  This configurable touch screen based app is the equivalent of aftermarket systems that typically cost thousands of dollars.  It provides multiple customizable “pages” onto which the user can drop a number of digital gauges from a library of 20.  These aren’t just fluff either; turbo boost pressure, all of the car’s internal temperatures, active g-forces, torque split front to rear and fuel flow are just a few of the areas which can be actively monitored.  I haven’t seen its like on any other car and it is quite good fun:


The standard data is there as well providing current vehicle health (time to next service, active tire pressure, oil level, etc), settings customization for a variety of areas, and system info (firmware rev).

In addition to the infotainment, the GTR also features (pause for a big breath): auto lights, auto wipers, auto this, auto that, folding mirrors, two zone auto climate control, backup camera (on the ’14, no luck on the 09, and on the GTR you really need it), heated seats, power everything and a partridge in a pear tree.  All of this is boilerplate stuff for cars in this class and nothing is left out.

Now let’s shift gears (no pun intended) and focus on the parts that make the car go.  As cars inch towards essentially being rolling supercomputers, the GTR blazes the trail.  Everything is actively monitored and adjusted by a control system.  The AWD shifts torque back and forth based on driving pattern, as well as vectoring left and right in response to cornering inputs.  The AWD behavior can be controlled using one of the center console’s three “because race car” switches and offers three settings: snow mode, which disables the VDC (vehicle dynamic control) and effectively locks the torque split; normal mode, which lets it do its thing; and race mode, which lets it do its thing more aggressively.  Similarly, the transmission is a dual clutch affair: essentially a manual transmission, but with two clutches and electronic rather than mechanical control.  It’s a rugged, noisy, aggressive beast of a box, particularly compared to the far more refined systems found inside Porsches, Audis and BMWs.  Because racecar!  There is a noticeable improvement in refinement at low speeds between the 09 and the 2014.  It’s night and day really, and is attributable to updated transmission control unit (TCU) software.  Side note… this can be retrofitted onto the 09s using an aftermarket tool.  There have been some physical changes over the years as well which, of course, cannot.  As with the VDC, the transmission behavior can also be controlled via toggle switch and has three settings.  In this case they are “save” mode, which is also for snow and holds a low gear; “normal”, which provides a dynamic balance between aggressive and smooth shifting based on conditions, but with a bias towards smooth; and “race”, which has the opposite bias.  Last but not least, the suspension is actively controlled as well.  The suspension toggle allows you to switch between “a little less brutal” (comfort), “fairly brutal” (normal) and “save it for the track” (race).
Here is another area where the ’14 separates itself from the 09.  In the ’14 you really can tell that the damping is more forgiving in comfort mode.  On the 09 you were left wondering if the switch was working!  Beyond the stuff you can control are the things you cannot.  The GTR ECU is constantly busy, and the turbo and fuel systems are track grade even in the base models.  All of these systems are more advanced in the ’14 than they were in the 09, but you can’t really feel that day to day.  What you can feel is the steering.  “Dynamically controlled in proportion to speed” in theory; in practice, the steering is always very, very heavy.  At low speeds during parking maneuvers it’s almost like driving an old unassisted rack and pinion (almost), but at speed it’s absolutely brilliant, with fantastic response and feedback.  I didn’t detect any significant difference in steering weight from the 09 to the ’14.  As with many things GTR, the bias is towards performance, leaving humdrum usability slightly compromised.


As in car entertainment systems go, the Audi MMI is right on the cutting edge.  It’s basically an Android Tegra 4 tablet.  The nav system is supplemented by Google Earth and Audi provides a set of in car apps.  It can’t be customized, nor can you get at the Android shell, however, and it all feels a bit slow (slower than normal, even for Android, which always feels slow to me).  The basic nav, sans Google, isn’t bad.  The car is also equipped with its own cellular radio (T-Mobile 3G), the downsides being 3G speeds and the fact that it carries an extra bill for service.  Once enabled you can do magic feeling things like Google search a point of interest right from the car and set it as a destination.  Obviously all of the usual stuff is there as well: fully functional Bluetooth, iPod/AUX interface and voice commands.  Building on the rich data capabilities, MMI adds WiFi hot spot functionality and an iOS app which allows an “i device” to link up and do various things.


It’s all very impressive and forward thinking, but it’s not without its downsides.  As mentioned, it’s all a bit slow.  Also, realistically, a lot of this stuff is fluff.  Particularly since cars aren’t autonomous (yet), so you still have to actually, you know, focus on driving.  It’s questionable if anything is really needed beyond a decent radio with USB, a GPS and a hands-free phone.  All of this app business does feel a bit superfluous (an interesting statement coming from a deep, deep technophile, I know).  Still, it’s cool and the UI is great.  Other uses for the LCD include the car info stuff (basics only here – service intervals, tire pressure, oil level, yada yada) and the backup camera (handy since none of these sport coupes have what you’d call great rear visibility).

In terms of car control systems, Audi is one of the few marques that matches the GTR almost point for point in sophistication.  Quattro has come a long way and is unbelievably capable (and complex) these days with full torque vectoring abilities.  Rather than toggle switches, subsystem behavior is controlled via the MMI.  Audi provides 3 levels of settings: comfort, dynamic, and auto.  Dynamic is comparable in concept to the race modes on the GTR and firms everything up.  In the case of Quattro it prefers rear bias and its torque shifting is both aggressive and performance focused.  Comfort goes the opposite route and auto attempts to strike a balance based on current driving conditions.  The S-Tronic transmission is a dual clutch box also, but a very different one than the GTR’s.  It’s every bit as quick and sharp in dynamic mode, but it is far smoother in comfort.  A typical driver might actually be fooled into thinking it’s a torque converter automatic.  In manual mode or dynamic though, the violent throttle blips on downshift and instantaneous upshifts tell a different tale.  One interesting thing worth noting is that S-Tronic is much more livable in dynamic mode than I found BMW’s DCT to be (on the E92 M3) in S5.  It is also the quietest dual clutch I’ve ever driven.  With the GTR you hear the turning of every gear and the rattling of every plate; with the S-Tronic you literally don’t hear anything regardless of mode.  Rounding out the “Drive Select” systems are throttle and steering.  Dynamic throttle control means that the gas pedal is now digital rather than analog.  Pressing the pedal really hard will have an effect determined by the computer reading the pedal input.  Even a heavy foot might lead to a reasonably tame response in comfort mode, whereas in dynamic mode you’ll be flying.  This particular gimmick is also present on the E92 M3, but absent on the GTR.
Of all of the control systems, I personally find throttle the least noticeable, but it’s generally harmless.  Steering, on the other hand, is extremely controversial.  The RS5 implements electronically controlled power steering.  I think of this system as the devil.  The idea is that variable hydraulic power steering (as on the E92 M3 or GTR) isn’t enough.  Instead, steering wheel input also becomes digital and is parsed by a computer.  The idea is that around town or in the mall parking lot the car can be driven with one finger, while on the highway at speed it transforms into a proper sports car.  Great in theory but, in my experience, these systems feel vague and imprecise in practice and never seem to be in the right mode at the right time.  That said, parking the RS5 is effortless and the low speed steering isn’t as light as lesser Audis, which is a good thing.  Once again these two settings are influenced by the comfort, dynamic and auto modes.

Once again alongside the gee whiz bits come the now baseline set of power everything, auto everything, adjustable everything.  Plus blind spot detection!  Handier than it might sound.  But no folding mirrors like on the GTR (’14, not ’09).  These are also handier than they might seem.  Unfortunately you really can’t have it all!

Ride Quality, Comfort, Experience and Ergonomics


We’ve already touched on some of these points, but they are worth reviewing.  All driver controls in the GTR are extremely intuitive with excellent ergonomics.  Everything you need is in reach and the placement is logical.  The seating position is excellent, if a bit high, and forward visibility is very good.  The nose is a bit long, and the hood has bulges, so it can sometimes be hard to get a sense of the car’s “edges”, but this is a bit subjective.  I feel the same way (a bit worse actually) in a Corvette or Viper, but not in a 911.  Others may not notice it at all.  The steering wheel both tilts and telescopes and finding the optimal position is very easy.  Seat adjustability is also excellent with 8 way control and a heating element.  The custom branded Recaro seats are extremely firm in the Black Edition, but they are also extremely supportive.  The 09 Premium had standard seats which were a leather/faux suede mix, also quite supportive, but a good bit softer (also made, or at least designed, by Recaro, I believe).  The backseats do exist, but things are very tight back there with limited leg and headroom.  Men, women and children below 5’8″ or so should be ok, especially for short rides.  Up front it’s much, much roomier, with both the driver and passenger having plenty of head, shoulder and leg room.

Starting the car up works just the way any modern keyless entry system does.  Have the key somewhere on you, step on the brake, and push the start button.  The GTR roars to life with an overwhelming sense of occasion.  The dual clutch transmission comes online in a wave of mechanical chattering that can be disconcerting to the uninitiated.  The engine is loud, as are the turbos, in a uniquely “big forced induction” way that most tend to either love or hate.  The cabin has some sound deadening, but not too much (every pound counts), so much of the cacophony is unfiltered.  The 09 was the same on all counts but worse.  Word has it the ’15 is the same but better.  For any of them, in my opinion, you have to like it to take it.  Me? I like it!

Getting the car rolling, the first thing most will notice is that the steering is heavy.  The more used to modern electrically assisted power steering systems you are, the heavier it will seem.  At low speeds (puttering in and out of a garage or around a parking lot), the GTR can feel a bit clunky and ungainly.  The ’14 transmission is far better in this regard and none of it is a showstopper anymore (quite frankly I never thought the 09 was all that bad, but it was too much for many).  The second thing you’ll notice is that there is pretty much no turbo lag.  The throttle is a bit heavy also, but it is also forgiving.  Give it too little throttle and Godzilla isn’t very interested; you’ve got some margin of error before it rampages, so the power is easy to modulate.  This is key to the now legendary accessibility of the GTR’s massive performance potential.  As the car gets rolling you’ll notice that the ride is very firm.  On harsh roads it can be jarring.  The 20″ wheels and runflats don’t help.  Non runflats make a big difference.  Comfort mode on the ’14 helps a fair bit also by relaxing the dampers.

At speed the GTR really comes alive, and getting to speed happens fast.  Fast enough that you have to watch yourself so you don’t end up in jail.  Very few (almost no) reasonably common cars have this kind of acceleration straight from the factory and none make it this easy to use except the 911 Turbo.  You can be doing 100MPH quite literally before you know it, and while the car absolutely communicates what is happening extremely well, it is also just completely composed while doing it.  So you can feel the speed, but the car never feels unsure of itself, and it’s all building very fast.  Given all of this it’s probably important that the brakes work too.  The good news is that they do.  Professional mags have recorded stopping distances from 60-0 at around 105ft.  This is fantastic, and with the well ventilated rotors there won’t be any brake fade on the street.  As well as they stop, they feel like they stop even better, which is also important.  There isn’t any drama, just extremely confident action, excellent pedal modulation and a firm stop.

It might come as a surprise to the majority of internet bench racers, but eventually a car tends to have to turn!  When that time comes, that overly heavy steering at low speeds suddenly becomes a wonder.  The steering is extremely direct and communicative.  None of the synthetic “is there a road there?” vagueness that comes with even the best electrical systems.  I don’t know that it exceeds the steering feel of an M3 (pre EPS), NSX or 911 (pre EPS), my benchmarks, but it’s absolutely up there.  The aggregate of all of these experiences is that you always feel in control of the GTR.  It’s very confidence inspiring.  There is zero body roll, which is surprising for such a heavy car.  I like to judge a car’s subjective level of driver engagement by how closely what I perceive the car to be doing, and where I perceive its wheels to be, matches reality.  In this respect the GTR is very, very good.  These qualities aren’t unique to the 09, the ’14 shares them as well, but all of the little tweaks Nissan makes each year do add up to a noticeable, if subtle, overall improvement.  The 09 felt great, the ’14 feels better.


In comparison to the GTR, it’s probably best to speak in terms of deltas with the RS5.  It is more comfortable in every way.  The suspension, while still pretty firm – especially with the optional 20s – is more compliant, and the seats are more comfortable (comfort seats, remember), but less supportive.  The entire cabin is a quieter place to be with no real mechanical chatter.  Push start and she purrs to life with a subtle grumble and no “WTF is that?” moment.  The ergonomics are just as good, as is the driver position and the adjustability of the seats.  Visibility all around is similar and the view out the front is subjectively a bit better courtesy of the shorter nose and smoother hood.  There is a fair bit more room for passengers in the back, but you certainly won’t be ferrying giants around either.  Getting going, the car feels almost the same as an automatic equipped Audi A5.  The dual clutch really is that smooth, and the electronic brains governing throttle and steering are most convincing at low speed.  As mentioned, the steering feels really boosted and is extremely light during parking maneuvers, but it is heavier than a 428i or new Audi S4.  Once rolling it does firm up nicely and feels good at speed.  It’s never truly direct though, and it does maintain that subtle vagueness that is a feature of EPS.  The RS5 definitely passes my “do I know what the car is doing and where it is?” test and the response to driver inputs is very direct; it’s just that, benchmarked against the likes of a GTR, E92 M3, 911 or NSX, it comes up short in terms of how much of the road is communicated through the wheel.  There is also a bit (not much mind you, but it’s there) of body roll comparatively.  This makes sense, as those aforementioned cars were all built specifically with varying levels of track duty in mind, whereas the RS5 was really designed for extremely high speed cruising (think Autobahn).

Stabbing the go pedal is an interesting experience for a number of reasons.  First is the electronic brain behind the throttle which, depending on the current driving mode, will impact how much throttle you need to give to get moving.  Second is the nature of the engine itself.  The RS5 uses a wonderful, normally aspirated, high revving small block V8: 450 horsepower out of 4.2 liters at 8250RPM.  Engines like this are a very specific experience in that they have massive top end and a rev happy nature that hauls when you goose them, but they produce proportionately little torque down low.  There is no tire twisting, axle bending explosion of force unless you really gun it.  If you do that, then look out!  This takes some getting used to for folks who are used to relying entirely on low RPM torque to get moving (particularly folks coming from well tuned turbo applications).

The dual clutch is butter smooth and almost never misses a beat.  In dynamic mode downshifts are accompanied by a booming and aggressive blip of the throttle.  Upshifts are blink of an eye quick.  In dynamic it’s very smart about holding the revs without getting too crazy based on driving conditions, in comfort it isn’t too lazy and is perfect for daily driving duty.  On the surface it seems like the S-Tronic is just superior to the GTR dual clutch in every way, and for most drivers in most situations that is almost certainly true.  It’s worth noting though that the GTR transmission is handling a lot more power and a ton more torque and can handle even more still just fine.  It also never misses a beat when it counts the most which is flying along a racetrack.  So once again, the GTR makes concession to “because racecar” that the Audi doesn’t bother with.  The Audi, instead, goes the other way and makes concessions to “high speed luxury cruiser”.

When it’s time to stop, the brakes also feel great, but not GTR great.  Confidence is just a bit lower, although pedal modulation is good and stopping distances are too.  The same, actually, as the GTR at about 104 feet from 60.  On the street it’s all fine; on the track you can see how the GTR would be the better companion.

Overall the RS5 driving experience is an interesting one.  To me it slots somewhere in between the M3 and the C63.  Not an outright muscle car like the AMG, with its brutish torque and tail happy character, it still feels more muscular than the E92 M3, which is a much more surgical instrument in terms of feel despite also carrying a high revving small V8 and not being that much lighter.  I think all of these cars, though, are within a stone’s throw of each other and are amazing both on a track and off.  Light years beyond the luxury cars of just ten years ago in terms of comfort, technology and durability, and beyond the sports cars of the same period in terms of acceleration, braking and handling.  Really incredible stuff!  The GTR, on the other hand, was built to, and successfully does, battle the 911 Turbo S.  It’s a different class really, and even a short drive on local roads demonstrates that.  The RS5 is absolutely easier than the GTR though.  As accessible as the GTR is, it’s accessible for a supercar.  The RS5 is a high performance coupe that rolls just like a normal car.  There is a big difference.  Handing the GTR over to a stranger I’d have some concerns, potentially, just because of the points called out above.  Not Viper level concerns, but there is some instruction required.  The RS5, conversely, is a lot like the S5 much of the time.  This can be bad, this can be good.  It all depends on priorities.

Performance – A Short One

This section is a no brainer, no need to divide it.  We can just refer to the numbers here.  The RS5 pulls the 1/4 mile in a very healthy 12.4 at a 109MPH trap and has been tested at anywhere from 3.9-4.4s to 60 with launch control, and who knows without it.  The GTR runs the 1/4 in 11.2 at over 120MPH and has been tested below 3 seconds to 60 using launch control by basically every publication on earth, with one or two (suspicious) exceptions.  Without launch control the GTR tends to add a tenth or three based on past tests.  So we’re talking about a car where, just getting in and stomping the gas, you’re hitting 60MPH in below three and a half seconds.  That’s solidly in the realm of insanity.  This same difference plays out in lap times as well, with the GTR’s Nurburgring showings down below 7:20, while the RS5 is nearly a minute slower at a still respectable 8:00+.

The Soundtrack and Closing Thoughts

Before wrapping this one up I think the aural experience of these two beasts needs to be discussed, if for no other reason than that they are so dramatically different.  We’ve talked about how the RS5 uses a high revving small block V8.  My RS5 was also equipped with the factory sport exhaust.  If there is one thing I will miss about the RS5 (possibly as long as I live!) it is the soundtrack.  I can honestly say that nothing short of a Ferrari sounds as good as the RS5 for below $300k.  It is significantly better than the M3 sound.  Some might argue in favor of other cars, and that’s where it gets subjective, but as a specific representation of an 8000RPM V8 (which is a very unique thing) the RS5 is just phenomenal.  Not just the exhaust either, but also the actual engine.  I find these days the vast majority of “enthusiasts” really don’t even know the difference and focus entirely on exhaust sound.  This really is because most actual engines just don’t sound all that impressive.  High revving engines do.  Both 8 cylinder (Ferrari, E92 M3, RS5) and 6 cylinder (911 GT3, NSX).  As for the exhaust, that sounds incredible also, and the valving ensures that she only screams when you’re really gunning it, not when you’re rolling out of the garage at 6am.  Great stuff indeed.

As for the GTR, the soundtrack is possibly one of the most unique in production.  The whooshing, chattering, clunking and grunting really does make you feel like you’re in a racecar.  The turbos and VR38DETT V6 sound a lot like a jet engine.  The stock exhaust doesn’t have the guttural growl, or high pitched scream, of some of the aftermarket parts, but it does make a music of its own that sounds fantastic.   All in all the total package is a racier, more technical, but really less exotic experience than the RS5.

That’s probably a good note to conclude on actually.  If you’ve hung in there this long you can probably guess it, but in the final accounting the GTR is a race car for the street that a normal person can drive, though in doing so they will pay some tax for the privilege.  It can punish you in ways that a normal car wouldn’t, but it can still do plenty of normal car duty realistically (carrying passengers, driving daily, driving in snow).  Matching its absolute performance with a warrantied, factory stock car isn’t easy no matter what your budget might be, but below $150k it’s pretty much impossible.  This performance characteristic hasn’t gone unnoticed and, as a result, the car has become a legend.  It’s also rare enough, and expensive enough, that it has crossed into exotic territory and carries all of the good, and bad, that that status brings.  The changes Nissan has made over the years are possibly the most impressive part of the GTR experience.  Having direct experience now with both the earliest model and the (almost) most recent, I can say that the current ones really are a whole different car.  The 2009 was a great starting point, but it needed to go in the direction it’s gone.  For the Black Edition the changes are limited in 2015 (the softer suspension is a Premium model change), but even the self adjusting and leveling headlights offer truly effective and impressive value to the driver.  I have no doubt that Nissan will not be resting, and you have to respect that.

The RS5 is a different thing altogether.  It’s a high speed cruiser that can ferry a banker to work on the Autobahn in style during the week, and still handle a solid track day on the weekend and hang in there with the likes of an M3 or 911.  This is equally impressive in its own way.  The performance is obviously a big step down from the GTR, but the comfort, luxury, and practicality are a big step up.  The RS5 is very much a “sleeper” in some ways.  Most of the population would simply assume it is a highly sensible and nicely set up A5 (or at least an Audi coupe of some sort).  A very “no apologies” car, an Audi is the type of vehicle you can bring anywhere: enough cachet to satisfy brand snobs, but not really enough to offend the reflexively anti-brand.  To enthusiasts, and especially Audi fans, the RS badge says it all.  There is no doubt that this is a special car even if it is hiding in standard clothing.

So how to sum this up in two lines?  If your goal is no-holds-barred performance, and a true supercar experience, in a quirky but exotic package with a heritage that challenges the elite status quo, then the GTR is the car.

If the goal is high speed luxury in a wolf-in-sheep’s-clothing package that requires neither apologies nor expectations, and brings a luxury marque ownership experience to true enthusiast car ownership, then the RS5 is the car.


It goes without saying that when you need something done (and by “done” I mean bringing 85 megatons of melodrama to an almost completely ridiculous situation) you call in the big guns:

They were trying to KILL it!!

And when the “situation” happens to be an 1100 foot tall radioactive dinosaur that is indestructible and breathes lightning, well sometimes you even need to call in an assist just to make sure people realize that this shit is serious!

MMMrrrmmph... unnnnfff.... aaarrraAAAAarrrooo!

Whoa.  That’s a lot of gravitas.  I think we all need a break here before we collapse under its intense weight! OK, that’s better.  What has these two titans of overstatement so rattled?  Well, I think we all know.  There can be only one who inspires such dread in men!  Or can there?  Is there not another that shares the legendary namesake? Indeed there is.  One which requires an entirely different kind of talent to tame it!

I need a 10 second car!!! (RIP brother!)

I need a 10 second car!!! (RIP brother!)

All right, all right, that’s enough of this.  I think we all see where this is going, no?  Last entry I hinted that there could be some additional instability in the ComplaintsHQ garage despite all promises, oaths and pledges to the contrary (all things begging to be broken, if we’re honest!).  For those just joining, allow me to catch you up.

In Japan there is a man named “Mizuno-san”.  He’s an insane genius who says things like “I wanted to build a supercar you could drive with your child, your wife, your lover” (true story).  Proof of his insanity is that, despite working for Nissan, he decided that it was time to topple Porsche off of their high horse.  The punchline is that somehow this madman did it.  Japanophile car nuts are no doubt fully aware of the Skyline GTR legend.  Most others, though, almost certainly are not.  One must understand that in Japan, there is a time honored tradition of taking crappy commuter cars (ok, ok… let’s call them “basic transportation”) and, through the application of dark automotive magic, making them “race car”.  That’s not to say they are entirely alone in this insane pursuit; Ford and Volkswagen in particular are also quite adept, but the Japanese seem to have an almost intrinsic cultural flair for it.  Always conservative and cautious in their approach to the US market, however, the Japanese are careful about how (and indeed if) they send these wonderful little beasts over.  The two most familiar faces on these shores are the technical tour de force Mitsubishi Lancer Evolution (based on the tragic Lancer sedan) and the scrappy Subaru Impreza WRX STi (based on the quite good Impreza).  But there is, as they say, another.  Unbeknownst to most (and by “most” I mean basically only the US, since bewilderingly everywhere else on Earth had access) Nissan was in this game too.  Outside of the US, Nissan built a basic sedan called the Skyline.  In its tamest form it was about as boring as possible.  Once they slapped that “GT-R” on the back though… Well… Then it became a thing indeed!  The Skyline GT-R powered through generation after generation, looking a bit weird and challenging some of the greatest performance cars in history at a variety of tracks.  As its legend grew, and the US remained oddly frozen out of the fun, an actual grey market for the car (including in some cases conversion to left hand drive) formed.

In the mid 2000s, having embarked on his mad science project, and deciding we had had enough after some 25 odd years of denial (sounds like a marriage! bah dum bum!), Mizuno-san announced that the next generation GT-R would not only not be a Skyline, it would instead be purpose built from the ground up, and it would be heading to the US!  It was, he crowed, going to humble the 911 Turbo at a price point of $60k.  It would also solve world hunger and bring peace to the Middle East! (kidding about those two! although maybe…)  It should come as no surprise (especially given it has already been spoiled top of paragraph) that he actually made good on these boasts.  In 2008 the Nissan GTR (no hyphen) greeted the world to much fanfare, and also healthy skepticism.  Like the NSX more than a decade before, it rocked Europe to its core.  Six years on, the GTR has solidified its place in the automotive pantheon, continually pressing the 911 Turbo S (the S, mind you!) to be better.  It has been dubbed Godzilla by the faithful! (that’s what all the lead-in was about, if you were wondering)  And unlike the NSX, each year the car has been heavily evolved.  Heady stuff.  Of course in classic “bait and switch” fashion, the price has also skyrocketed.  That whole “$60k” bit lasted one year.  Today the cheapest GTR is $100k and the most expensive is $160k.  The “bargain supercar” is still a “bargain” compared to a Bugatti Veyron, but it’s actually catching up in sticker price with its arch rival from Stuttgart, which is a bit odd since it’s a Nissan.

Keen readers will note that I am both a big fan, and prior owner, of the mighty Godzilla.  Mine was an ’09, which really is a crude thing compared to the latest editions and is a good bit slower as well (“slow” being relative, as the ’09 is faster than most cars on the road).  Each year Nissan has not only tweaked endurance and performance, but also fit and finish, materials quality, ride comfort and a whole host of other things.  With each new GTR release my curiosity has grown stronger, but the cost of entry has kept pulling farther away.  This trend seemed unlikely to reverse, since Nissan both produces, and sells, hardly any of these things and there are generally enough buyers to grab them.  This year, though, something went wrong.  Maybe the actuaries screwed up.  Maybe the market is soft.  Maybe the GTR is getting old.  Whatever the reason, there are a lot of ’14s still hanging around (and “a lot” in GTR terms means a few hundred nationwide).  The dealers must have been grumbling, because along came this:


$10k is a lot.  On a car where the typical incentive is $0k it’s really a lot.  Of course Nissan did something odd and made this $10k to the dealer.  That means that until there was critical mass, they might not pass it on.  Every car they sold after this deal, without eating into the $10k, made them money.  Over time, though, that critical mass was reached.  As of last month, there were about 100 2014 GTRs left in the US and dealers were getting aggressive.  Lack of demand for the car already leads to some decent deals (invoice is typical; cutting into holdback is possible).  Add $10k to that and you’re talking serious money off.  More importantly for me, though, taking upwards of $20k off of MSRP actually brings it in range of what I can consider pulling the trigger on (albeit agonizingly painfully!).  I decided that despite having a good German girl in the garage, I’d walk the path of the dirty dog and start sniffing around Japan again!

I had some criteria in mind in order to not feel like I was throwing good money after bad (I was going to be doing that, but no point feeling that way!).  Right off the bat, a 2014 is a fantastic thing if your only experience was an ’09.  The difference truly is dramatic.  New vs. used is another amazing thing.  Luckily, these two criteria were built in.  I decided that another Super Silver would not cut it and then, in a fit of irrational exuberance, decided that a mere Premium wouldn’t cut it either!  Black Edition or bust! (hey, it’s only money, right? yikes!)  The last criterion I set was that the car would have to be local.  I put this in as a “fate” test.  If it was meant to be, then a car would turn up within driving distance (call it 100 miles).  If not, well then I’d be saving money and there’d be no monkey business!

This mission immediately required quite a lot of hunting, despite all of my restrictive criteria, and, as is always the case, it seemed utterly impossible to replicate anything like the deals that people were, anecdotally, getting left and right according to the forums.  Perseverance can sometimes pay off, though, and right as I had given up, I discovered the good folks at Ramsey Nissan in NJ (shout out to Eddie, a wonderful GTR sales specialist).  Ramsey had not one, but two 2014 Black Editions in black.  I had narrowed the color selection down to black or white, with white preferred but black acceptable, so black could work; especially since the nearest remaining white examples were in Virginia!  Car deals, particularly ones involving a trade, are like an epic war story where at any moment a detente may be called.  Let’s cut to the chase and see how this one ends:

Oh no he DITENT!

Oh yes I did!  That, dear readers, is the 2014 Black Edition next to my poor, unloved RS5, of which I admit I was perhaps undeserving!  It is on its way next door to the Audi dealer, where I strongly suspect it will find a more passionate owner than I, one who will be able to appreciate it as it deserves!  As for me? Well… Can anyone truly resist the power of GODZILLA?  Coming soon I will do a rundown of my driving impressions of the ’09 vs. the 2014.  Stay tuned!  In the meantime, car porn!

[Photo gallery: GTR cabin, dash, and Recaro seats; RS5 and GTR together from the front, front-left, rear-left, and rear-right]