
Debunking Cloud IaaS Mobility Myths

Many things in life appear great on the surface, but wisdom has taught us never to judge a book by its cover or believe in silver bullets. The latest story I frequently hear being pitched in IT circles is that of cloud IaaS Utopia. In this universe, workloads can simply move anywhere (between disparate providers and private data centers, for example) without consequence. Typically you’ll hear a few data points to justify the story, including:

- We use OpenStack, therefore your workloads can run anywhere
- We use an open source hypervisor, so you can run your workloads anywhere
- We support Open Virtualization Format (OVF) import and export, so you won’t have any portability concerns
- We have a VM import and export tool, so you can easily add and remove VMs

The last three points are mainly VM-centric, so let’s begin there. When it comes to workload mobility, the VM has always been the easy part. Don’t get me wrong: VM import tools do a great job of dynamically removing the right device drivers and associated software and properly preparing a VM disk for a new hypervisor; however, that is rarely a challenge today. OVF takes VM import and export beyond a simple vendor or provider tool, and OVF’s extensibility allows a VM, or an aggregate of multiple VMs, to specify its management, security, compliance, and licensing requirements to the provider or management platform (a short sketch of what that looks like follows this excerpt).

Moving Beyond the Checkboxes

So far, the openness story sounds legit, but holes often quickly emerge when you try to operationalize any new workload in a new environment. Operational issues such as orchestration, backup, disaster recovery (DR), security, identity, and several requisite management tasks (such as performance, capacity, and configuration management) ultimately impede true workload mobility. Here is a list of considerations:

- Third party integration: It’s critical to understand how a third party solution is supported rather than just the fact that it is supported. Integration can occur at various parts of the stack (such as at the hypervisor management layer APIs instead of at a higher orchestration layer), meaning that moving to a new hypervisor could require product replacement or additional integration work and QA. It’s also important to understand how features are exposed through a platform’s API set (such as open source vs. proprietary APIs). Multiple API integrations may be required to operate the workload in a hybrid cloud.
- Third party software licensing: Can software that manages or runs in the VM have its licenses follow the VM to new infrastructure, or is a new procurement cycle required?
- Vendor ecosystem: Are all of your preferred vendors supported, and do they provide rich integrations for the hybrid cloud environment? How easy is it to find details on third party...
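To make the OVF point above concrete, here is a minimal Python sketch (not any vendor’s tool) that lists the sections each VirtualSystem declares in an OVF 1.x descriptor, separating standard DMTF sections from vendor extensions. Those extension sections are where management, security, compliance, and licensing requirements ride along with the VM; the file name appliance.ovf is a placeholder.

```python
# Minimal sketch: enumerate the sections an OVF 1.x descriptor declares
# for each VirtualSystem. Standard sections live in the DMTF envelope
# namespace; anything else is a vendor extension that a new provider
# may not understand. "appliance.ovf" is a placeholder path.
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

def list_ovf_sections(path: str) -> None:
    root = ET.parse(path).getroot()  # the <Envelope> element
    for vs in root.iter(f"{{{OVF_NS}}}VirtualSystem"):
        vs_id = vs.get(f"{{{OVF_NS}}}id", "<unnamed>")
        print(f"VirtualSystem: {vs_id}")
        for child in vs:  # e.g. VirtualHardwareSection, ProductSection, ...
            ns, _, tag = child.tag[1:].partition("}")  # split "{ns}Tag"
            origin = "standard" if ns == OVF_NS else f"extension ({ns})"
            print(f"  {tag:<30} {origin}")

if __name__ == "__main__":
    list_ovf_sections("appliance.ovf")
```

The disk conversion is table stakes; whether the target platform honors (or silently drops) those extension sections is where portability claims tend to break down.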

Read More

What IT Operations Can (and Should) Learn from the Electronics Industry

The state of New Jersey is home to some of the most significant electronics inventions in our history, including countless inventions by Thomas Edison and what became the modern transistor. Bell Labs ushered in a sustained period of innovation and, along with it, a robust and growing workforce. My own technology career also started in the electronics industry, where as a young US Marine I did component-level repair (troubleshooting and repairing transistors and integrated circuits on electronic circuit boards). While such specializations were important at the time, today they are mostly irrelevant. In New Jersey I continually run into folks who used to have solid careers as electronics technicians, and most of them are no longer doing any work related to technology. The reason: their skill is no longer important, or the number of skilled professionals is far greater than the available jobs. That happened because lower costs and agility requirements (such as faster repairs and high uptime demands) made their specializations no longer practical. We’re seeing those same requirements drive public, private and hybrid cloud computing initiatives today.

If you’re wondering what any of this has to do with IT operations, consider the role of the typical IT ops professional. He or she often must deploy, optimize, troubleshoot, and remediate a variety of infrastructure-related services. At the same time, agility demands are mandating substantial investments in automation. As I’ve said before, the key to automating systems economically is to remove as many variables as possible. For the IT ops professional, that will ultimately shrink the demand for high degrees of customization. We’re moving to a world where IT will be consumed as a series of modular components and integrated systems. IT value will increasingly be determined by cost, agility, and availability; successful IT professionals will worry less about operational minutiae and simply leave that up to vendors and trusted partners.

You could go to Radio Shack, buy a bunch of pieces and parts, and build a radio, but why would you? You can go to a department store and buy one for a lower price that will likely last for years. The notion of building a radio from scratch (outside of a learning exercise) is laughable. Ultimately I foresee the same fate for IT infrastructure systems. Building highly automated infrastructure systems from scratch will be viewed as an academic exercise. Value will shift from building integrated systems to integrating systems with key business processes. In the end, there will be plenty of work for all of us, but we must evolve as IT professionals or risk being left behind. To do that, we need to lose the...

Read More

Complexity is Great for Profits… Just Not Your Profits

Many people who have seen me present over the past year have heard me discuss the notion of complexity and who truly benefits from it. Given the current state of IT budgets and resources, the time is ripe to take a closer look at this issue. Most organizations that I work with are grappling with mandates to be more efficient and responsive to business or department needs (said another way, more agile) and to improve application availability, all while maintaining a flat budget. These mandates often lead to public, private and hybrid cloud initiatives that include an emphasis on high degrees of automation.

What is the Goal?

A typical first step on the private/hybrid cloud journey is to look at the innovators in the public cloud for inspiration and then either adopt those solutions and/or work to build an internal private or hybrid cloud. This is where the problem often starts. Look at the major public cloud IaaS providers and you will notice that they all share the same common architectural tenets:

- A single x86 hypervisor platform and common virtual infrastructure layer
- A single purpose-built cloud management platform
- A stack whose value is mostly derived from software and not hardware

Now consider how those principles compare to some of the private cloud architectural proposals you’ve probably seen. Over the past two years, I have seen dozens of private cloud architectures, and I place most in the Frankencloud category. Frankenclouds are often built from a collection of parts using a somewhat democratic process, and many either fail completely or come at too high a cost. Let me explain. From a political perspective, it’s far easier to allow each IT group to keep working with their favorite technologies than to standardize on one set of integrated components. Multisourcing is often encouraged as a means to potentially save on acquisition costs. So while a private cloud project may begin with a goal of emulating successful public cloud IaaS implementations, the resulting architecture may look nothing like it. Common attributes can include:

- A multi-hypervisor stack with separate management silos
- One or more cloud management platforms that orchestrate across a variety of virtualization, storage, networking and security components
- A stack that has both hardware- and software-optimized components

If multisourcing throughout the cloud IaaS stack is such a good thing, then why isn’t it pervasive in the public cloud? The reason is simple: it isn’t such a good thing. That said, enterprises are often encouraged to multisource virtualization, storage, networking and compute infrastructures, among other layers. The reason why? Complexity is great for profits! Many traditional vendor and consulting practices have business models that depend on high degrees of complexity and the professional services...

Read More

Why I Decided to Join VMware

When you have the perfect job, it’s not easy to walk away. I spent the past seven years as an analyst, first at Burton Group and then at Gartner. In my role as an analyst, I was able to work with hundreds of end user organizations and help them with their virtualization, private cloud, and desktop transformation strategies and architectures. I was also able to take feedback from the field and work with a variety of vendors to help them shape their future innovations and product roadmaps. At Gartner, I only had to pick one side (that of the end user), and I relished playing the advocate role. I always thought that it would take the absolute perfect opportunity for me to leave Gartner, and I strongly believe VMware has provided it.

In my role as CTO, Americas, I will continue to do many of the things I loved at Gartner. I’ll be even more active in social media and community engagement, and I’ll be working closely with VMware customers across the Americas on their current and future cloud, mobile and virtualization strategies. Unlike my role at Gartner, I’ll now have a direct conduit into VMware’s talented product teams to ensure that community needs are being met and often exceeded. Sure, I could have taken on a similar role at other vendors, so why VMware? There are several reasons.

This is Just the Beginning

Yes, VMware pioneered x86 virtualization, and VMware’s success and market dominance in the virtualization space are without question. Some have wondered if VMware’s best days are in the past, but I don’t think that’s even remotely the case. Turn back the clock 15 years: when VMware was building its flagship platform, many thought it was a gimmick with limited use cases. Most of the industry didn’t foresee that VMware would fundamentally reshape the enterprise data center the way it has. If you look at the work that VMware has done with the software-defined data center (SDDC), it’s easy to see that industry skepticism is back. VMware ESX quickly became a no-brainer business decision because of the server consolidation benefits it provided (not to mention the flexibility afforded by vMotion and DR simplicity). With SDDC, we’re beyond servers; we’re now talking data centers. At full maturity, the SDDC will do for data center consolidation what ESX did for server consolidation, and once again the ROI benefits will be obvious. One could argue that the cloud era will also accelerate data center consolidation, and that’s true. However, when you consider the vCloud Hybrid Service (vCHS) and the massive vCloud service provider network, VMware is well-positioned to offer the most...

Read More

VMworld 2013: Will VMware Regain its Voice?

Think of one of your favorite bands. Odds are that when they first hit the scene they were brash, unapologetic, and reached stardom at an unthinkable pace. Then what happened? If they’re like some of my favorite bands, they got rich, “matured,” and lost touch with what got them to their early success. They then spent their remaining days playing their early hits to a devoted audience. Or they broke up. Remind you of VMware, or other disruptive technology vendors? I ask because here we are a week before VMworld, and I’m wondering if the predictable VMware will show up (you know, the one that plays the hits and caters to its base) or whether we’ll see something brasher. My money is on the older, richer, more conservative VMware. Wearing my customer hat, I’d love to be wrong.

Ten years ago VMware didn’t care who it offended. Along the way, server hardware vendors had no choice but to partner with it, even though VMware was screaming from the rooftops, “With us, you’ll need fewer servers!” Now think about VMware’s 2013 push around the software-defined data center (SDDC). You know what word isn’t in SDDC? Hardware. If VMware really wants the SDDC to take off, it needs to rediscover its inner rebellious teenager, the one that got it to where it is in the first place.

Consider successful public cloud service providers such as AWS. Amazon’s stack places a premium on software and sees hardware as a commodity. Yet VMware is pushing a software-defined data center mostly on top of enterprise-grade hardware from its partners. How do you get to be cost competitive with AWS when you place a premium on the entire stack while Amazon places a premium only on software? You don’t. And if VMware and its partners believe it’s possible, they’re fooling themselves. Take a look at the VMworld 2013 Global Diamond Partners. They have one thing in common (hint: it starts with “hard” and ends with “ware”). So in the end, the graduation party for the SDDC is primarily sponsored by hardware vendors.

Don’t get me wrong. I’m not saying that you can get rid of the enterprise hardware in your data centers (certainly not yet). But there is increasingly less need to build a virtual and physical infrastructure around the greatest common denominator: the tier 1 workload. That’s great for the vendors but not so great for your bottom line. Down the road I expect several of our clients to look at alternative, lower cost technologies for less critical workloads. VMware needs to look at offerings with lower price points...

Read More

VMware Will be a Public Cloud IaaS Provider

Let’s face it. Sometimes being an “enabler” is admirable. However, if you’ve seen an episode of Intervention lately, you know that being an enabler is not always a good thing. VMware’s IaaS strategy was to enable its partners to offer vCloud services and give its customers nearly unlimited choice of cloud providers (more than 9,500 partners). There was a big issue with this strategy: it assumed that VMware’s cloud partners would be A-OK with allowing customers to come and go. At the end of the day, that didn’t fit VMware’s provider partners’ business models. No one wants to race to the bottom of a commodity market, and providers rightfully should be concerned with their ability to differentiate from competitors and show value while sharing a common VMware stack.

Today’s news shouldn’t come as too much of a surprise. Nearly two years ago I blogged that this day would eventually come. The market would force VMware to be a provider, and it has. Forget about the talk of “open.” At the end of the day, every vendor and provider is in the business of doing whatever possible to lock customers in and make it tough for them to leave. Providers have always wanted high degrees of extensibility so that they can add value to a cloud offering and, in the end, offer enough customized services to make it tough for customers to leave.

If we look at today’s IaaS announcement, VMware is trying to have greater control over the “choice” its customers get. Choice will mean a VMware-hosted offering that in theory will make it easy for customers to move VMware-based workloads in and out of the public cloud. The aim is an “inside-out” approach where workloads move seamlessly between a private data center and a public cloud. The trick here, however, is how important mobility and choice will be to customers. Workloads that go straight to the cloud and have few traditional enterprise management needs can go to any cloud. Front-end web servers are a great example: static data, built to scale horizontally, and no backup requirements.

VMware’s challenge going forward will be to differentiate. If VMware is the “enterprise alternative” to Amazon, it had better launch its IaaS solution with enterprise features (AWS isn’t perfect, but it has tons of features that large enterprises now take for granted). Redundant data centers, enterprise storage, networking, backup, and security are a must. In addition, it must offer serious tools for developers; the time for VMware to show the results of its investment in Puppet Labs should be when the public IaaS offering launches. Otherwise, Amazon and other providers will continue to win on features and the ease of experience that developers have...

Read More