What’s the correct answer to this question? Where did Intel decide to put the memory sockets on its latest motherboard – the 4-socket S4600LH2? If you’re curious, you can see where the memory sockets are located here, but that’s not the correct answer. The right answer is: Who Cares?!!! Long ago, we all decided that Intel was smart enough to figure out where it’s best to place components on its boards. While hardware partners may have some concerns, most end-user organizations couldn’t care less. We care about things like cost and SLA, but the motherboard hardware architecture is not that interesting.

And when you buy a server from Intel, are you paralyzed by an overwhelming fear of lock-in? Odds are the thought never even crosses your mind. Our x86 servers, desktops, laptops, and so on all have industry-standard inputs and outputs like USB, PCIe, and Ethernet. So we can buy a server, integrate the appropriate add-ons through PCIe slots or over the network, and we’re good to go.

Now consider the role of data center infrastructure today. It’s easy to argue that API-driven, programmatic, software-defined infrastructure is quickly becoming “table stakes” and a requirement to simply stay in business. Every business in the world will reach a point where agile infrastructure is no longer a differentiator but an expectation, whether hosted locally or consumed as a cloud service.

So if everyone simply needs it, what’s the business value of building custom infrastructure solutions? It’s great for IT services companies, but I’d argue it’s not great for your organization. Rather than spend time on projects that don’t differentiate your business, shift your teams to work on technology that provides a competitive advantage. How an application or service connects to infrastructure, and how infrastructure interconnects, will become less and less interesting over time, especially for the core commodity infrastructure services that are common to every IT organization and aren’t a differentiator (e.g., basic compute, storage, network, and security management).

So instead of buying servers, you’ll buy turnkey programmable infrastructure pods running integrated software stacks such as EVO:RAIL or EVO:RACK. Ultimately, the software-defined infrastructure and its associated management services will have software updates delivered from the cloud, much as iOS and Android mobile devices receive consistent updates today.

Now let’s go back to the earlier motherboard example. To get comfortable with not caring about how infrastructure is interconnected, our future pods need industry-standard inputs and outputs. Instead of ports like USB, imagine that apps, services, and tools integrate with infrastructure pods through industry-standard APIs such as the OpenStack, Cloud Foundry, or Docker APIs. Now you get the lower total cost...
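To make the “industry-standard inputs and outputs” idea concrete, here is a minimal sketch of what talking to such a pod could look like in practice, using the OpenStack compute API via the openstacksdk Python library. The cloud profile name and the image, flavor, and network IDs below are hypothetical placeholders, not anything specified in this post.

```python
# Minimal sketch: requesting a compute instance through the OpenStack API
# with openstacksdk. "mypod" and the UUIDs are placeholder assumptions.
import openstack

# Connect using a cloud profile defined in clouds.yaml (e.g. "mypod").
conn = openstack.connect(cloud="mypod")

# Ask the pod for a new compute instance -- a standard request against a
# standard interface, much like plugging a device into a USB port.
server = conn.compute.create_server(
    name="app-node-01",
    image_id="IMAGE_UUID",                # placeholder
    flavor_id="FLAVOR_UUID",              # placeholder
    networks=[{"uuid": "NETWORK_UUID"}],  # placeholder
)

# Block until the instance reaches ACTIVE status, then report it.
server = conn.compute.wait_for_server(server)
print(server.status)
```

The point isn’t the specific API: the same request could be expressed against Cloud Foundry or Docker endpoints. What matters is that the interface is standard, so you stop caring how the pod is wired internally.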
