
What IT Operations Can (and Should) Learn from the Electronics Industry

The state of New Jersey is home to some of the most significant electronics inventions in our history, including countless inventions by Thomas Edison and what became the modern transistor. Bell Labs ushered in a sustained period of innovation and, along with it, a robust and growing workforce. My own technology career started in the electronics industry, where as a young US Marine I did component-level repair (troubleshooting and repairing transistors and integrated circuits on electronic circuit boards). While such specializations were important at the time, today they are mostly irrelevant. In New Jersey I continually run into folks who used to have solid careers as electronics technicians, and most of them are no longer doing any work related to technology. The reason: their skills are no longer needed, or the number of skilled professionals far exceeds the available jobs. That happened because lower costs and agility demands (such as faster repairs and higher uptime requirements) made their specializations impractical. We’re seeing those same demands drive public, private and hybrid cloud computing initiatives today.

If you’re wondering what any of this has to do with IT operations, consider the role of the typical IT ops professional. He or she often must deploy, optimize, troubleshoot, and remediate a variety of infrastructure-related services. At the same time, agility demands are driving substantial investments in automation. As I’ve said before, the key to automating systems economically is to remove as many variables as possible. For the IT ops professional, that will ultimately shrink the demand for high degrees of customization. We’re moving to a world where IT will be consumed as a series of modular components and integrated systems. IT value will increasingly be determined by cost, agility, and availability; successful IT professionals will worry less about operational minutiae and simply leave that up to vendors and trusted partners.

You could go to Radio Shack, buy a bunch of pieces and parts, and build a radio, but why would you? You can go to a department store and buy one for a lower price that will likely last for years. The notion of building a radio from scratch (outside of a learning exercise) is laughable. Ultimately I foresee the same fate for IT infrastructure systems. Building highly automated infrastructure systems from scratch will be viewed as an academic exercise. Value will shift from building integrated systems to integrating systems with key business processes. In the end, there will be plenty of work for all of us, but we must evolve as IT professionals or risk being left behind. To do that, we need to lose the...
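The point about removing variables can be made concrete with a rough sketch (hypothetical platform names and counts, not drawn from any real environment): every independent choice left in the stack multiplies the paths an automation workflow has to build, test, and maintain.

```python
# Rough illustration with hypothetical platform choices: each independent
# variable multiplies the code paths an automation workflow must cover.
from itertools import product

hypervisors = ["hypervisor_a", "hypervisor_b"]
storage_backends = ["san", "nas", "local"]
os_templates = ["linux_v1", "linux_v2", "windows"]

# Every combination is a distinct path a provisioning runbook has to handle.
combinations = list(product(hypervisors, storage_backends, os_templates))
print(f"Automation paths with no standardization: {len(combinations)}")  # 18

# Standardizing on one hypervisor and one storage backend collapses the matrix.
standardized = list(product(hypervisors[:1], storage_backends[:1], os_templates))
print(f"Automation paths after standardization: {len(standardized)}")    # 3
```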

Read More

Complexity is Great for Profits… Just Not Your Profits

Many people who have seen me present over the past year have heard me discuss the notion of complexity and who truly benefits from it. Given the current state of IT budgets and resources, the time is ripe to take a closer look at this issue. Most organizations that I work with are grappling with mandates to be more efficient and responsive to business or department needs (said another way, more agile) and to improve application availability, all while maintaining a flat budget. These mandates often lead to public, private and hybrid cloud initiatives that include an emphasis on high degrees of automation.

What is the Goal?

A typical first step on the private/hybrid cloud journey is to look at the innovators in the public cloud for inspiration and then either adopt those solutions or work to build an internal private or hybrid cloud. This is where the problem often starts. Look at the major public cloud IaaS providers and you will notice that they all share the same common architectural tenets:

- A single x86 hypervisor platform and common virtual infrastructure layer
- A single purpose-built cloud management platform
- A stack whose value is mostly derived from software and not hardware

Now consider how those principles compare to some of the private cloud architectural proposals you’ve probably seen. Over the past two years, I have seen dozens of private cloud architectures, and I place most in the Frankencloud category. Frankenclouds are often built from a collection of parts using a somewhat democratic process, and many either fail completely or come at too high a cost. Let me explain. From a political perspective, it’s far easier to allow each IT group to keep working with its favorite technologies than to standardize on one set of integrated components. Multisourcing is often encouraged as a means to potentially save on acquisition costs. So while a private cloud project may begin with a goal of emulating successful public cloud IaaS implementations, the resulting architecture may look nothing like it. Common attributes can include:

- A multi-hypervisor stack with separate management silos
- One or more cloud management platforms that orchestrate across a variety of virtualization, storage, networking and security components
- A stack that has both hardware- and software-optimized components

If multisourcing throughout the cloud IaaS stack is such a good thing, then why isn’t it pervasive in the public cloud? The answer is simple: it isn’t a good thing. Yet enterprises are often encouraged to multisource virtualization, storage, networking and compute infrastructures, among other layers. The reason why: complexity is great for profits! Many traditional vendor and consulting practices have business models that depend on high degrees of complexity and the professional services...
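To make the cost of those per-silo management layers concrete, here is a minimal sketch (hypothetical class and method names, not any vendor’s actual API) of why each additional hypervisor platform adds a code path that the cloud management layer has to carry, test, and upgrade:

```python
# Minimal sketch: a cloud management layer fronting multiple hypervisor silos
# needs one adapter per platform; a standardized stack needs exactly one.
class HypervisorAdapter:
    """Common interface the cloud management platform codes against."""
    def create_vm(self, name: str, cpus: int, memory_gb: int) -> str:
        raise NotImplementedError

class VendorAAdapter(HypervisorAdapter):
    def create_vm(self, name, cpus, memory_gb):
        # Placeholder for vendor A's API calls, auth model, and error handling.
        return f"vendor-a:{name}"

class VendorBAdapter(HypervisorAdapter):
    def create_vm(self, name, cpus, memory_gb):
        # A second code path: different API, different failure modes,
        # different upgrade and certification cycle.
        return f"vendor-b:{name}"

# A Frankencloud routes every request (and duplicates every feature) across
# all registered silos; a standardized stack registers a single adapter.
ADAPTERS = {"silo-a": VendorAAdapter(), "silo-b": VendorBAdapter()}

def provision(silo: str, name: str) -> str:
    return ADAPTERS[silo].create_vm(name, cpus=2, memory_gb=8)

print(provision("silo-a", "web01"))
```

Every capability the cloud offers – provisioning, monitoring, DR, chargeback – has to be implemented and validated once per adapter, which is exactly where the complexity (and the professional services revenue) accumulates.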

Read More

Why I Decided to Join VMware

When you have the perfect job, it’s not easy to walk away. I had spent the past seven years as an analyst for Burton Group and then at Gartner. In my role as an analyst, I was able to work with hundreds of end user organizations and help them with their virtualization, private cloud, and desktop transformation strategies and architectures. I was also able to take feedback from the field and work with a variety of vendors to help them shape their future innovations and product roadmaps. At Gartner, I only had to pick one side – that of the end user – and I relished playing the advocate role. I always thought that it would take the absolute perfect opportunity for me to leave Gartner, and I strongly believe VMware has provided it.

In my role as CTO, Americas, I will continue to do many of the things I loved at Gartner. I’ll be even more active in social media and community engagement, and I’ll be working closely with VMware customers across the Americas on their current and future cloud, mobile and virtualization strategies. Unlike my role at Gartner, I’ll now have a direct conduit into VMware’s talented product teams to ensure that community needs are being met and often exceeded. Sure, I could have taken on a similar role at other vendors, so why VMware? There are several reasons.

This is Just the Beginning

Yes – VMware pioneered x86 virtualization, and VMware’s success and market dominance in the virtualization space are without question. Some have wondered if VMware’s best days are in the past, but I don’t think that’s even remotely the case. Turn back the clock 15 years to when VMware was building its flagship platform: many thought it was a gimmick with limited use cases. Most of the industry didn’t foresee that VMware would fundamentally reshape the enterprise data center the way it has. If you look at the work that VMware has done with the software-defined data center (SDDC), it’s easy to see that industry skepticism is back. VMware ESX quickly became a no-brainer business decision because of the server consolidation benefits it provided (not to mention the flexibility afforded by vMotion and DR simplicity). With SDDC, we’re beyond servers – we’re now talking data centers. At full maturity, the SDDC will do for data center consolidation what ESX did for server consolidation, and once again the ROI benefits will be obvious. One could argue that the cloud era will also accelerate data center consolidation, and that’s true. However, when you consider the vCloud Hybrid Service (vCHS) and the massive vCloud service provider network, VMware is well-positioned to offer the most...

Read More

VMworld 2013: Will VMware Regain its Voice?

Think of one of your favorite bands. Odds are that when they first hit the scene they were brash, unapologetic, and reached stardom at an unthinkable pace. Then what happened? If they’re like some of my favorite bands, they got rich, “matured,” and lost touch with what got them to their early success. They then spent their remaining days playing their early hits to a devoted audience. Or they broke up. Remind you of VMware, or other disruptive technology vendors? I ask because here we are a week before VMworld, and I’m wondering if the predictable VMware will show up – you know, the one that plays the hits and caters to its base – or will we see something brasher? My money is on the older, richer, more conservative VMware. Wearing my customer hat, I’d love to be wrong.

Ten years ago VMware didn’t care who it offended. Along the way, server hardware vendors had no choice but to partner with it even though VMware was screaming from the rooftops, “With us, you’ll need fewer servers!” Now think about VMware’s 2013 push around the software-defined data center (SDDC). You know what word isn’t in SDDC? Hardware. If VMware really wants the SDDC to take off, it needs to rediscover its inner rebellious teenager – the one that got it to where it is in the first place.

Consider successful public cloud service providers such as AWS. Amazon’s stack places a premium on software and treats hardware as a commodity. Yet VMware is pushing a software-defined data center mostly on top of enterprise-grade hardware from its partners. How do you get to be cost competitive with AWS when you place a premium on the entire stack while Amazon places a premium only on software? You don’t. And if VMware and its partners believe it’s possible, they’re fooling themselves. Take a look at the VMworld 2013 Global Diamond Partners. They have one thing in common (hint: it starts with “hard” and ends with “ware”). So in the end, the graduation party for the SDDC is primarily sponsored by hardware vendors.

Don’t get me wrong. I’m not saying that you can get rid of the enterprise hardware in your data centers – certainly not yet. But there is increasingly less of a need to build a virtual and physical infrastructure around the greatest common denominator – the tier 1 workload. That’s great for the vendors but not so great for your bottom line. Down the road I expect several of our clients to look at alternative lower cost technologies for less critical workloads. VMware needs to look at offerings with lower price points...

Read More

VMware Will be a Public Cloud IaaS Provider

Let’s face it. Sometimes being an “enabler” is admirable. However, if you’ve seen an episode of Intervention lately, you know that being an enabler is not always a good thing. VMware’s IaaS strategy was to enable its partners to offer vCloud services and give its customers near-unlimited (>9,500 partners) choice of cloud providers. There was a big issue with this strategy – it assumed that VMware’s cloud partners would be A-OK with allowing customers to come and go. At the end of the day, that didn’t fit the business model of VMware’s provider partners. No one wants to race to the bottom of a commodity market, and providers rightfully should be concerned with their ability to differentiate from competitors and show value while sharing a common VMware stack.

Today’s news shouldn’t come as too much of a surprise. Nearly two years ago I blogged that this day would eventually come. The market would force VMware to be a provider, and it has. Forget about the talk of “open.” At the end of the day, every vendor and provider is in the business of doing whatever possible to lock customers in and make it tough for them to leave. Providers have always wanted high degrees of extensibility so that they can add value to a cloud offering and, in the end, offer enough customized services to make it tough for customers to leave.

If we look at today’s IaaS announcement, VMware is trying to have greater control of the “choice” its customers get. Choice will mean a VMware-hosted offering that in theory will make it easy for customers to move VMware-based workloads in and out of the public cloud. The aim is an “inside-out” approach where workloads move seamlessly between a private data center and a public cloud. The trick here, however, is how important mobility and choice will be to customers. Workloads that go straight to cloud and have few traditional enterprise management needs can go to any cloud. Front-end web servers are a great example – static data, built to scale horizontally, and no backup requirements.

VMware’s challenge going forward will be to differentiate. If VMware is the “enterprise alternative” to Amazon, it had better launch its IaaS solution with enterprise features (AWS isn’t perfect, but it has tons of features that large enterprises now take for granted). Redundant data centers, enterprise storage, networking, backup, and security are a must. In addition, it must offer serious tools for developers; the time for VMware to show the results of its investment in Puppet Labs should be when the public IaaS offering launches. Otherwise, Amazon and other providers will continue to win on features and the ease of experience that developers have...

Read More

The 90s Called: They Want Their Procurement Team Back

Today I talked to a client about their private cloud architecture and pending investments. The conversation hit on a lot of areas, ranging from software licensing, to vendor support, to orchestration, and finally to standardization. When we got to the topic of standardization and procurement, they couldn’t contain themselves. One member of the organization said: “We can’t even say we’re a Microsoft Exchange shop. As far as procurement is concerned, we can’t even have a standard for email.”

If that sounds odd to you, then consider your investments for private cloud. Providers achieve tremendous economies of scale through high degrees of standardization, yet that approach is nearly impossible for many enterprises. For many, the reason is the procurement group, whose job is to save the company on capex. These folks have long prided themselves on getting a 15% discount by selecting one vendor’s product over another’s. Once the discounted solution is procured, it’s the job of IT Ops to run it. If that decision results in a 30% premium on opex, then so be it. At that point the procurement group is already focused on the next purchase. It’s a story I hear a lot, and in my opinion it is an extremely shortsighted approach. Until procurement is retooled to place the emphasis on TCO instead of capex, I will continue to work with clients who string together a hodgepodge of point solutions at a ridiculously high cost. Granted, not every vertical faces this issue to the same degree, but it is especially painful in the public sector. The finger often gets pointed at IT Ops for being too costly, but the real source is, ironically, a group that prides itself on saving money – the procurement group. Procurement is trying to save money in the best interest of the business, but an approach purely focused on capex often hurts the business. Cloud computing is forcing one of the greatest collective IT modernization efforts in our history. It’s time that procurement processes joined us in the 21st century as well.

Update: This morning (February 5th) I discussed this particular issue with a client. In their case, standardization was a mandate set at the VP level that impacts all business units. The mandate changed the role of procurement to one of standardization enforcement, with the expectation of getting better volume discounts by working with fewer vendors. In addition, he mentioned the added benefits around opex costs. The procurement team no longer looks for the best deal in terms of upfront cost. They check to see if a product already exists within the approved vendor set and requires the business...
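The capex-versus-opex tradeoff above can be made concrete with a back-of-the-envelope calculation (all figures hypothetical): a 15% acquisition discount is quickly erased by a 30% opex premium over a typical system lifetime.

```python
# Hypothetical numbers only: compare total cost of ownership when procurement
# optimizes for capex versus the cost of running the "bargain" afterwards.
list_price = 1_000_000        # acquisition cost of the standardized option
base_annual_opex = 300_000    # yearly run cost of the standardized option
years = 5                     # assumed system lifetime

standard_tco = list_price + base_annual_opex * years

discounted_capex = list_price * 0.85     # the 15% procurement "win"
premium_opex = base_annual_opex * 1.30   # the 30% opex premium IT Ops inherits
bargain_tco = discounted_capex + premium_opex * years

print(f"Standardized option TCO: ${standard_tco:,.0f}")   # $2,500,000
print(f"'Bargain' option TCO:    ${bargain_tco:,.0f}")    # $2,800,000
```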

Read More

Heterogeneous Virtualization Trends at Gartner Data Center

Heterogeneous virtualization has been a hot topic among clients, and last week at the Gartner Data Center conference in Las Vegas I presented a session on the subject. During the session, I polled the audience on their heterogeneous virtualization plans. Fifty participants responded to each polling question. The first question I asked was about the hypervisors currently deployed (note that the values are the number of respondents and not a percentage).

[Chart: hypervisors currently deployed, by number of respondents]

As you can see, most participants used VMware vSphere as expected, and there was a good mix of Hyper-V, XenServer, and some RHEV and Oracle VM. It’s one thing to have multiple hypervisors, but not everyone is using multiple hypervisors to run production server applications in their data centers. That’s why I asked attendees which hypervisors they were using to run production server applications.

[Chart: hypervisors used for production server applications, by number of respondents]

Notice that the drop was pretty significant. In the first poll, 44 non-VMware hypervisors were in use. In the second poll, that number dropped to 25. The drop is consistent with an important but often unreported multi-hypervisor trend – while most organizations are using multiple hypervisors, most are not using multiple hypervisors for their production server applications (Oracle VM is a common exception). The second or third hypervisor deployed within an organization is often used to support branch office or departmental deployments. The fact that the additional hypervisors are being used is important, but so is understanding the use cases. With that in mind, I also asked attendees about their plans for a single hypervisor.

[Chart: plans for standardizing on a single hypervisor]

Most (57%) planned to use a single hypervisor for production server workloads that required DR, with DR simplicity being the primary driver behind that decision. Clients frequently tell me that they fear that multiple hypervisors will recreate some of the same DR challenges that they initially solved with server virtualization. In addition, the OPEX concerns are real. Clients doing heterogeneous virtualization today almost always have a separate management silo for each hypervisor. When political or geographical issues preserve IT silos, the per-hypervisor silos might not be too big of a deal. However, organizations looking to be more centralized and efficient should aim for higher degrees of standardization. Does this data mean that VMware wins? Not necessarily. I’ve had many calls with clients that are considering switching to Hyper-V as their standard virtualization offering. That switch will take place over a 3-5 year period, with the end goal of having a homogeneous virtualization layer. If VMware is smart, it will focus on the OPEX and DR benefits...

Read More