The mainstreaming of cloud-based service delivery into the standard IT portfolio is heralding a fundamental shift in how technology professionals interact with resources and capacity. There is still some “debate” about this shift, but honestly I remember very similar-feeling debates about the PC at the tail end of mainframe and midrange dominance, about the Internet in the early ’90s, about virtualization around 2005 or so, and about consumerization as recently as the launch of the iPad. In each case the skeptics and cynics simply failed to recognize a fundamental shift. Looked at together, these shifts are evolutionary; they are all related. It’s a continual “democratization” and “commoditization” of technology. Cloud fits well into this family tree, and the way forward is clear for anyone willing to see it.
With that said, what will the impact of this latest shift in how we use technology be? Ultimately we are moving toward a model where very few folks are intimately involved in the business of building, caring for, and managing the life cycle of base infrastructure. The days of huge departments of storage, network and compute experts, each spending thousands of hours in “labs” testing, validating and qualifying kit, are drawing to a close. Developers want to be able to fail fast and get to production quickly. Business analysts and architects want to help them create solutions that are sane and efficient for the business. And the business that pays for technology professionals wants to pay folks who keep their eye on the bottom line. Building and running data centers was never a core competency from the CEO’s view; it was an ugly and expensive “must have.” There are of course some specific exceptions to this: some use cases within huge enterprises, and some specific industries, do enjoy true differentiated benefit from building the actual plumbing, but they’re rare and getting rarer by the hour. Gear is there to run apps, and apps need performance characteristics and capabilities. Where and how those are delivered, when it comes right down to it, really doesn’t matter. I’ve seen this firsthand over the past few years as I’ve been directly involved in some very forward-looking “cloud first mover” cases that many skeptics would insist couldn’t possibly happen.
If you read the title, you may be wondering why I specifically mentioned a component vendor in an article that seems to be all about how components don’t matter. Fair enough! I’ve been having a lot of the same conversations lately, under a different banner, about services and the obfuscation of the details of the physical tier, and it got me thinking. I try to avoid predictive articles because they often make you look silly down the road, but one of my better ones was penned back when AMD was making overtures toward ATI. What I imagined their long-term plan to be ultimately came to pass in the form of the APU. Flush with that success, and hoping lightning can strike twice, I’ve dusted off my AMD crystal ball for another viewing!
Intel has exerted dominance over AMD for a good while now and, honestly, with good reason. For several generations they have had a significant IPC advantage. AMD, on the other hand, has had the cheaper parts in an absolute cost sense. Most IT shops, though, have stuck with Intel and Intel-based kit in order to be “safe” and to have a good line of sight on both capacity planning and hardware life cycle management. Here’s the rub, though: those things are going away.
With the exception of edge cases that absolutely require specific processor features (workloads that use AVX instructions, for example), most workloads are simply “x86”. Cloud capacity is purchased in digestible chunks, not server by server. These chunks are designed to map to application performance characteristics, and there tends to be support and SLAs behind their delivery. In almost all cases, no real details about the physical infrastructure foundations, and certainly no guarantees that they will remain constant, are provided. Amazon provides some CPU guidance, mainly to cover those instruction-set edge cases, but it reserves the right to change that over time and makes no promises with regard to hardware life cycle. As the industry matures, we will see developers continue to put more distance between themselves and hardware as a result of this new consumption-based model. It maps well to their real focus. As they increasingly write to abstraction layers and app containers, and as the capacity they consume is increasingly provisioned in a resource-centric, self-service way, we will see the underlying hardware start to “not matter”. If I’m right, this trend could be a huge boon for AMD.
Crazy? Consider this: AMD has all three next-generation consoles. Why? Consoles are a commodity black box where the developers really care only about the APIs and dev kits, and the users care only about the apps and ecosystem. It’s a performance-leveled and controlled environment. The cloud service provider is in a similar position, buying components at staggering scale. They need to deliver compute cycles, RAM, storage, IOPS and bandwidth en masse, and they need to do it with slim margins and gigantic volume. Their customers don’t care how it’s delivered, just that the SLA is met and capacity is always available. IT services in a future “cloud first” model are likely to look a heck of a lot more like Xbox Live than the typical enterprise datacenter circa 1999. Think about it…