
What a Motherboard Can Tell Us About the Future of Data Center Infrastructure

What’s the correct answer to this question? Where did Intel decide to put the memory sockets on its latest motherboard, the 4-socket S4600LH2? If you’re curious, you can see where the memory sockets are located here, but that’s not the correct answer. The right answer is: who cares?! Long ago, we all decided that Intel was smart enough to figure out where best to place components on its boards. While hardware partners may have some concerns, most end-user organizations couldn’t care less. We care about things like cost and SLAs; the motherboard hardware architecture just isn’t that interesting.

That said, when you buy a server from Intel, are you paralyzed by an overwhelming fear of lock-in? Odds are that the thought never even crosses your mind. Our x86 servers, desktops, laptops, and so on all have industry-standard inputs and outputs like USB, PCIe, and Ethernet. So we can buy a server, integrate the appropriate add-ons through PCIe slots or over the network, and we’re good to go.

Now consider the role of data center infrastructure today. It’s easy to argue that API-driven, programmatic, software-defined infrastructure is quickly becoming table stakes, a requirement simply to stay in business. Every business in the world will reach a point where agile infrastructure is no longer a differentiator but an expectation, whether hosted locally or consumed as a cloud service. So if everyone needs it, what’s the business value of building custom infrastructure solutions? It’s great for IT services companies, but I’d argue that it’s not great for your organization. Rather than spend time on projects that don’t differentiate your business, shift your teams to work on technology that provides a competitive advantage.

How an application or service connects to infrastructure, and how infrastructure components interconnect, will become less and less interesting over time, especially for the core commodity infrastructure services that are universal across every IT organization and aren’t a differentiator (e.g., basic compute, storage, network, and security management). So instead of buying servers, you’ll buy turnkey programmable infrastructure pods running integrated software stacks such as EVO:RAIL or EVO:RACK. Ultimately, the software-defined infrastructure and associated management services will have software updates delivered from the cloud, much as iOS and Android mobile devices receive regular updates today.

Now let’s go back to the earlier motherboard example. To get comfortable with not caring about how infrastructure is interconnected, our future pods need industry-standard inputs and outputs. Instead of ports like USB, suppose that apps, services, and tools integrated with infrastructure pods through industry-standard APIs such as OpenStack, Cloud Foundry, or Docker APIs, as sketched below. Now you get the lower total cost...
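As a rough illustration of that consumption model, here is a minimal sketch of what talking to such a pod through a standard API could look like, using an OpenStack Compute (Nova) style request. The endpoint, token, image, and flavor values are placeholders I’ve invented, not details from the post; the point is that the application only speaks the published API and never needs to know how the pod is wired internally.

```python
import requests

# Hypothetical pod endpoint and a pre-acquired identity token (placeholders).
POD_COMPUTE_ENDPOINT = "https://pod.example.com/compute/v2.1"
AUTH_TOKEN = "REPLACE_WITH_KEYSTONE_TOKEN"

def launch_instance(name, image_ref, flavor_ref):
    """Request a new server from the pod via the standard Compute API.

    The caller never sees which hypervisor, NICs, or disks sit underneath;
    only the API contract matters.
    """
    body = {"server": {"name": name, "imageRef": image_ref, "flavorRef": flavor_ref}}
    resp = requests.post(
        f"{POD_COMPUTE_ENDPOINT}/servers",
        json=body,
        headers={"X-Auth-Token": AUTH_TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["server"]["id"]

if __name__ == "__main__":
    # Image and flavor identifiers are placeholders for whatever the pod exposes.
    server_id = launch_instance("web-01", image_ref="IMAGE_UUID", flavor_ref="FLAVOR_ID")
    print(f"Requested server {server_id}")
```

Swap the pod for another vendor’s and, as long as it speaks the same API, this code does not change. That is the motherboard argument applied to infrastructure.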

Read More

Cloud-Scale Management “Likes” Social Media

Managing IT resources across complex enterprises has always been challenging, and those challenges are only growing. We’re adding more variables in terms of clouds, automated processes, and people, to name a few. Consider the steady growth of the Internet of Things (IoT): for many organizations, management complexity could expand by several orders of magnitude. If that’s not bad enough, there is one variable that IT has never been able to control: people. People come and go, and they often play by their own rules. When it comes to managing an enterprise, we can no longer assume that people will conform to defined enterprise management standards. Instead, IT operations must conform its standards to the customers it serves. That is why, going forward, social media can be an effective tool to bridge the gap between traditional management tools and processes and more collaborative work styles. Some of you may be envisioning the scenario below, but there are serious and significant use cases for deep social integration into enterprise management.

Consider a typical problem that I hear frequently from our clients: if scheduled maintenance will impact specific application instances (VMs, containers, etc.), how does IT operations notify the prospective application owners, or simply the members of the organization who care about a particular application or service? That problem may sound easy on the surface, but for many organizations it has long been a struggle. The original application owner may have left the company, and it may not be clear who cares about a particular application or service. Experience has already shown that mass emails are rarely effective. This is where social media can bridge the gap. Consider the following workflow.

As employees and contractors come and go, the social platform can be quickly updated to reflect ownership and interest changes. Social streams can also be monitored to spot potential bugs, performance issues, or pending performance spikes. For example, I recently met with a client that was building a solution to monitor social streams and news feeds to predict the load on its trading systems and proactively expand capacity before the inevitable performance spike arrives. There are many advantages to enhancing cloud management with a social fabric, including:

- Cutting down noise: Key stakeholders of any application or service can follow the objects relevant to that service (such as VMs, physical hosts, networks, and containers) instead of getting inundated with notification emails that may not apply to them.
- Aggregation of notifications: Associated group notifications can be aggregated into a single Socialcast post (see the short sketch below). That single post can include information pulled or pushed from a variety of management tools such as vCenter Server, vRealize Operations and...
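To make the aggregation idea concrete, here is a minimal sketch in Python. Everything in it is assumed for illustration: the alert sources stand in for whatever vCenter Server or vRealize Operations would report, and post_maintenance_summary() targets a placeholder REST endpoint rather than the actual Socialcast API, whose details aren’t covered in this post.

```python
import requests

# Placeholder endpoint and group name; the real social platform's API will differ.
SOCIAL_STREAM_URL = "https://social.example.com/api/messages"
MAINTENANCE_GROUP = "payments-app-stakeholders"

def collect_alerts(sources):
    """Gather pending maintenance notifications from each management tool.

    Each source is assumed to expose pending_notifications(), returning short
    strings such as "Host esx-12 enters maintenance at 02:00 UTC".
    """
    alerts = []
    for source in sources:
        alerts.extend(source.pending_notifications())
    return alerts

def post_maintenance_summary(group, alerts):
    """Aggregate all alerts into one post so followers see a single update."""
    body = {
        "group": group,
        "title": "Scheduled maintenance summary",
        "message": "\n".join(f"- {alert}" for alert in alerts),
    }
    requests.post(SOCIAL_STREAM_URL, json=body, timeout=30).raise_for_status()
```

The same pattern addresses the ownership problem above: because stakeholders follow the group rather than sit on a hand-maintained email list, the post reaches whoever currently cares about the application.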

Read More

VMworld Session Replay: Software-Defined Data Center through Hyper-Converged Infrastructure

Last week I discussed the benefits of hyper-converged infrastructure in this post on the VMware CTO blog. In short, we spend far too much time building and maintaining commodity IT services. To many IT decision makers, commodity means non-differentiating, which equates to the services that every organization in the world must deploy and maintain. That includes tasks like server, storage, network, and security provisioning and maintenance. The thought is simple: if you can get commodity services delivered in a software/hardware stack that’s maintained by a vendor, opex costs can be dramatically reduced and IT operations teams are freed to focus on what really matters. That affords more time for improving the agility and efficiency of critical business applications. The security administrator who previously spent too much time manually writing firewall rules now has more time to research emerging threats and associated response methodologies. When you let go of building things that are just basic building blocks for any application, there is so much more time for true innovation.

If you could get your core data center infrastructure as an iOS-like experience, with simple deployment and updates, a choice of carriers (or vendors), and an ecosystem of “apps” that provide simple turnkey integrations, why wouldn’t you? Consider all of the time you could spend driving business innovation instead of maintaining the same non-differentiating infrastructure as all of your competitors. Take a look at the VMworld session below, which I co-presented with Mornay Van Der Walt, for a full look at our strategy and current portfolio in the hyper-converged...

Read More

My VMworld 2014 Sessions

I’m pretty busy at this year’s VMworld North America conference. If you would like to drop by one of my sessions, here they are:

SDDC3245-S – Software-Defined Data Center through Hyper-Converged Infrastructure
Co-presented with Mornay Van Der Walt
Monday, Aug 25, 2:00 PM – 3:00 PM
The Software-Defined Data Center is the indisputable future of IT. The question for you then becomes how to get your company and IT organization there, and where to start. Key consideration factors include choice, flexibility, time to value, ongoing maintenance, ease of use, and budget, amongst others. With these in mind, and understanding that there is no single “one size fits all” solution, VMware offers several ways to get you to the Software-Defined Data Center, from a cloud computing reference architecture built with our partners, to our joint partnership in VCE, to new solutions based on our Virtual SAN technology. In this session, Chris Wolf, our Americas Chief Technology Officer, and Mornay Van Der Walt, a Vice President in R&D, will dive deeper into the latter and discuss new solutions based on Virtual SAN that will transform the end-to-end user experience as you know it today, from initial purchase to deployment to ongoing maintenance and support.

OPT2668 – DevOps Demystified! Proven Architectures to Support DevOps Initiatives
Co-presented with Aaron Sweemer
Tuesday, Aug 26, 3:30 PM – 4:30 PM
Wednesday, Aug 27, 3:30 PM – 4:30 PM
DevOps is the use-case architecture most in demand among VMware customers. Numerous VMware engineers developed and reviewed a field-validated DevOps architecture and best-practice methodology in early 2014. This session highlights key findings from that field exercise and provides highly detailed architecture diagrams and a step-by-step methodology for supporting DevOps initiatives through the vCloud Suite and open standards such as OpenStack. Attendees will leave the session with detailed integrations for common developer tools and everything needed to fully support DevOps initiatives using VMware technologies.

SDDC3350 – VMware and Docker – Better Together
Co-presented with Ben Golub
Tuesday, Aug 26, 12:30 PM – 1:30 PM
Attend this session to gain deep insight into the collective VMware and Docker strategy. As technology evolves, use cases will abound for VMs, containers, and combinations of the two. Key elements of the Docker platform, Docker Engine and Docker Hub, are explored, along with specific vCloud Suite integrations. Attendees will leave this session with knowledge of highly differentiated VMware and Docker integration points that provide leading flexibility, performance, security, scalability, and management capabilities. Ample time for Q&A is provided to have your most pressing questions...

Read More

Debunking Cloud IaaS Mobility Myths

Many things in life look great on the surface, but wisdom has taught us never to judge a book by its cover or believe in silver bullets. The latest story I frequently hear pitched in IT circles is that of cloud IaaS Utopia. In this universe, workloads can simply move anywhere (between disparate providers and private data centers, for example) without consequence. Typically you’ll hear a few data points to justify the story, including:

- We use OpenStack, therefore your workloads can run anywhere
- We use an open source hypervisor, so you can run your workloads anywhere
- We support Open Virtualization Format (OVF) import and export, so you won’t have any portability concerns
- We have a VM import and export tool, so you can easily add and remove VMs

The last three points are mainly VM-centric, so let’s begin there. When it comes to workload mobility, the VM has always been the easy part. Don’t get me wrong: VM import tools do a great job of dynamically removing the right device drivers and associated software and properly preparing a VM disk for a new hypervisor; however, that is rarely a challenge today. OVF takes VM import and export beyond a simple vendor or provider tool, and OVF’s extensibility allows a VM, or an aggregate of multiple VMs, to specify management, security, compliance, and licensing requirements to the provider or management platform (see the short sketch below).

Moving Beyond the Checkboxes

So far, the openness story sounds legit, but holes often emerge quickly when you try to operationalize a workload in a new environment. Operational issues such as orchestration, backup, disaster recovery (DR), security, identity, and several requisite management tasks (such as performance, capacity, and configuration management) ultimately impede true workload mobility. Here is a list of considerations:

- Third-party integration: It’s critical to understand how a third-party solution is supported rather than just the fact that it is supported. Integration can occur at various parts of the stack (such as at the hypervisor management layer APIs instead of at a higher orchestration layer), meaning that moving to a new hypervisor could require product replacement or additional integration work and QA. It’s also important to understand how features are exposed through a platform’s API set (such as open source vs. proprietary APIs). Multiple API integrations may be required to operate the workload in a hybrid cloud.
- Third-party software licensing: Can software that manages or runs in the VM have its licenses follow the VM to new infrastructure, or is a new procurement cycle required?
- Vendor ecosystem: Are all of your preferred vendors supported, and do they provide rich integrations for the hybrid cloud environment? How easy is it to find details on third party...
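Picking up the OVF point above, the sketch below reads the kind of extensible metadata an OVF descriptor can carry. The descriptor fragment and property keys are invented for illustration and are not taken from the post; real appliances define their own keys. It also illustrates the catch: the requirements travel with the package, but every target platform must still understand and honor them, which is exactly where the operational gaps appear.

```python
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

# Invented descriptor fragment; real OVF packages are far richer.
DESCRIPTOR = f"""
<Envelope xmlns="{OVF_NS}" xmlns:ovf="{OVF_NS}">
  <VirtualSystem ovf:id="payments-app">
    <ProductSection>
      <Property ovf:key="compliance.zone" ovf:value="pci" />
      <Property ovf:key="backup.policy" ovf:value="daily" />
    </ProductSection>
  </VirtualSystem>
</Envelope>
"""

def read_properties(descriptor_xml):
    """Return the custom key/value properties declared in the descriptor."""
    root = ET.fromstring(descriptor_xml)
    props = {}
    for prop in root.iter(f"{{{OVF_NS}}}Property"):
        props[prop.get(f"{{{OVF_NS}}}key")] = prop.get(f"{{{OVF_NS}}}value")
    return props

if __name__ == "__main__":
    # The metadata moves with the VM, but each target platform must still map
    # these hints onto its own backup, security, and compliance tooling.
    print(read_properties(DESCRIPTOR))
```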

Read More

What IT Operations Can (and Should) Learn from the Electronics Industry

The state of New Jersey is home to some of the most significant electronics inventions in our history, including countless inventions by Thomas Edison and what became the modern transistor. Bell Labs ushered in a sustained period of innovation, and along with it a robust and growing workforce. My own technology career started in the electronics industry, where, as a young US Marine, I did component-level repair (troubleshooting and repairing transistors and integrated circuits on electronic circuit boards). While such specializations were important at the time, today they are mostly irrelevant. In New Jersey I continually run into folks who used to have solid careers as electronics technicians, and most of them are no longer doing any work related to technology. The reason: their skill is no longer important, or the number of skilled professionals far exceeds the available jobs. That happened because lower costs and agility requirements (such as faster repairs and high uptime demands) made their specializations impractical. We’re seeing those same requirements drive public, private, and hybrid cloud computing initiatives today.

If you’re wondering what any of this has to do with IT operations, consider the role of the typical IT ops professional. He or she often must deploy, optimize, troubleshoot, and remediate a variety of infrastructure-related services. At the same time, agility demands are mandating substantial investments in automation. As I’ve said before, the key to automating systems economically is to remove as many variables as possible. For the IT ops professional, that will ultimately shrink the demand for high degrees of customization. We’re moving to a world where IT will be consumed as a series of modular components and integrated systems. IT value will increasingly be determined by cost, agility, and availability; successful IT professionals will worry less about operational minutiae and simply leave that to vendors and trusted partners.

You could go to Radio Shack, buy a bunch of pieces and parts, and build a radio, but why would you? You can go to a department store and buy one for a lower price that will likely last for years. The notion of building a radio from scratch (outside of a learning exercise) is laughable. Ultimately I foresee the same fate for IT infrastructure systems. Building highly automated infrastructure systems from scratch will be viewed as an academic exercise. Value will shift from building integrated systems to integrating systems with key business processes. In the end, there will be plenty of work for all of us, but we must evolve as IT professionals or risk being left behind. To do that, we need to lose the...

Read More

Complexity is Great for Profits… Just Not Your Profits

Many people who have seen me present over the past year have heard me discuss the notion of complexity and who truly benefits from it. Given the current state of IT budgets and resources, the time is ripe to take a closer look at this issue. Most organizations that I work with are grappling with mandates to be more efficient and responsive to business or department needs (in other words, more agile) and to improve application availability, all while maintaining a flat budget. These mandates often lead to public, private, and hybrid cloud initiatives that include an emphasis on high degrees of automation.

What is the Goal?

A typical first step on the private/hybrid cloud journey is to look at the innovators in the public cloud for inspiration and then either adopt those solutions or work to build an internal private or hybrid cloud. This is where the problem often starts. Look at the major public cloud IaaS providers and you will notice that they all share the same common architectural tenets:

- A single x86 hypervisor platform and a common virtual infrastructure layer
- A single purpose-built cloud management platform
- A stack whose value is mostly derived from software, not hardware

Now consider how those principles compare to some of the private cloud architectural proposals you’ve probably seen. Over the past two years, I have seen dozens of private cloud architectures, and I place most in the Frankencloud category. Frankenclouds are often built from a collection of parts through a somewhat democratic process, and many either fail completely or come at too high a cost. Let me explain. From a political perspective, it’s far easier to allow each IT group to keep working with its favorite technologies than to standardize on one set of integrated components. Multisourcing is often encouraged as a way to potentially save on acquisition costs. So while a private cloud project may begin with the goal of emulating successful public cloud IaaS implementations, the resulting architecture may look nothing like them. Common attributes include:

- A multi-hypervisor stack with separate management silos
- One or more cloud management platforms that orchestrate across a variety of virtualization, storage, networking, and security components
- A stack that has both hardware- and software-optimized components

If multisourcing throughout the cloud IaaS stack is such a good thing, then why isn’t it pervasive in the public cloud? The answer is simple: it isn’t a good thing. That said, enterprises are often encouraged to multisource virtualization, storage, networking, and compute infrastructures, among other layers. The reason why: complexity is great for profits! Many traditional vendor and consulting practices have business models that depend on high degrees of complexity and the professional services...

Read More