What’s the correct answer to this question?
Where did Intel decide to put the memory sockets on its latest motherboard – the 4-socket S4600LH2?
If you’re curious, you can see where the memory sockets are located here, but that’s not the correct answer. The right answer is: “Who cares?”
Long ago, we all decided that Intel was smart enough to figure out where it’s best to place components on its boards. While hardware partners may have some concerns, most end-user organizations couldn’t care less. We care about things like cost and SLA, but the motherboard hardware architecture is not that interesting. That said, when you buy a server from Intel, are you paralyzed by an overwhelming fear of lock-in? Odds are that the thought never even crosses your mind. Our x86 servers, desktops, laptops, etc. all have industry-standard inputs and outputs like USB, PCIe, and Ethernet. So we can buy a server, integrate the appropriate add-ons using PCIe sockets or over the network, and we’re good to go.
Now consider the role of data center infrastructure today. It’s easy to argue that API-driven, programmatic software-defined infrastructure is quickly becoming “table stakes” and a requirement to simply stay in business. Every business in the world will reach a point where agile infrastructure is no longer a differentiator but rather an expectation, whether hosted locally or as a cloud service. So if everyone just needs it, what’s the business value of building custom infrastructure solutions? It’s great for IT services companies, but I’d argue that it’s not great for your organization. Rather than spend time on projects that don’t differentiate your business, shift your teams to work on technology that provides a competitive advantage.
How an application or service connects to infrastructure and how infrastructure interconnects will become less and less interesting in the future, especially for all core commodity infrastructure services that are universal among every IT organization and aren’t a differentiator (e.g., basic compute, storage, network and security management). So instead of buying servers, you’ll buy turnkey programmable infrastructure pods running integrated software stacks such as EVO:RAIL or EVO:RACK. Ultimately, the software-defined infrastructure and associated management services will have software updates delivered from the cloud, similar to how you receive consistent updates for iOS and Android mobile devices today.
Now let’s go back to the earlier motherboard example. To get you comfortable with not caring about how infrastructure is interconnected, our future pods need to have industry standard inputs and outputs. Instead of ports like USB, suppose that apps, services and tools integrated with infrastructure pods using industry standard APIs such as OpenStack APIs, or Cloud Foundry or Docker APIs.
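To make the analogy concrete, here’s a minimal sketch of what coding against a standard infrastructure interface looks like. All class and method names here are hypothetical for illustration; this is not a real vendor, OpenStack, or Docker API.

```python
# A sketch of the decoupling idea: tools program against one standard
# interface, and each pod vendor's internals become an implementation detail.
# Every name below is made up for illustration purposes.
from abc import ABC, abstractmethod


class InfrastructurePod(ABC):
    """The industry-standard surface a tool codes against (the 'USB port')."""

    @abstractmethod
    def create_instance(self, name: str, flavor: str) -> dict:
        """Provision a compute instance and return its metadata."""


class VendorAPod(InfrastructurePod):
    """One vendor's turnkey pod; how it wires compute to storage is hidden."""

    def create_instance(self, name: str, flavor: str) -> dict:
        return {"name": name, "flavor": flavor, "provider": "vendor-a"}


class VendorBPod(InfrastructurePod):
    """A competing pod; the tool below needs no changes to use it."""

    def create_instance(self, name: str, flavor: str) -> dict:
        return {"name": name, "flavor": flavor, "provider": "vendor-b"}


def deploy_app(pod: InfrastructurePod) -> dict:
    # The tool only knows the standard interface, so swapping vendors
    # is like swapping one server with standard I/O ports for another.
    return pod.create_instance(name="web-01", flavor="m1.small")


for pod in (VendorAPod(), VendorBPod()):
    print(deploy_app(pod))
```

The point isn’t the code itself but the shape of it: the caller never touches a vendor-specific detail, which is exactly the property that standard APIs like OpenStack’s would give infrastructure pods.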
Now you get the lower total cost of ownership and simplicity of data center infrastructure, without fear of lock-in. This doesn’t have to be just a VMware vision, and I bet that you will see multiple vendor variations of these solutions in the future. Naturally, there’s not going to be a one-size-fits-all solution, just like there’s not a one-size-fits-all server today. Enterprises and even smaller cloud providers will buy bundled infrastructure solutions based on cost and SLAs, just like server purchases today. Third-party infrastructure components will integrate with these infrastructure pods similar to how you’d integrate custom hardware with Intel servers using PCIe slots today.
Infrastructure will never go away, and we’ll always need experts to build and deploy infrastructure solutions; however, most IT operations professionals will work with larger building blocks in the future. Remember, services organizations have made billions of dollars convincing every enterprise that it needs a custom infrastructure solution, so I’m sure that less forward-looking services companies will tell you that these bundled solutions are a bad idea. Just ask yourself if building a custom solution to a commodity IT problem will really be worth the extra expense and effort. Public cloud IaaS solutions are highly standardized for a reason: fewer variables lead to lower cost and greater agility. If you look at the number one reason for private cloud failures, it’s complexity attributed to unnecessary variables in the architecture. Many of us have made careers out of building complex solutions, so evolving to work with larger building blocks will take an adjustment. Electronics technicians began a similar evolution 25 years ago and still have successful careers, and IT operations folks will as well.
Having an industry-standard means to decouple applications, services, tooling and instrumentation from infrastructure will give you an architectural simplicity you’ve never had before. OpenStack could become the USB of the data center. As infrastructure evolves, just stay focused on the true sticky points (apps, services, tooling, operational processes, etc.); that’s where you absolutely need the industry-standard interfaces. How components within an integrated pod communicate should be an implementation detail left up to vendors. Just as you don’t care where Intel places memory banks, how a hypervisor or container host decides to talk to storage, for example, will become an implementation detail that we won’t care about. It’s easy to stick with what we know and stay in our comfort zone, but letting go of tasks that keep us busy yet provide zero differentiation will free you to truly innovate. Isn’t that why we all got into this business in the first place?
Note: Originally posted to the VMware CTO blog. You can add comments there.