My VMworld 2014 Sessions

I’m pretty busy at this year’s VMworld North America conference. If you would like to drop by one of my sessions, here they are:

SDDC3245-S – Software-Defined Data Center through Hyper-Converged Infrastructure
Co-presented with Mornay Van Der Walt
Monday, Aug 25, 2:00 PM – 3:00 PM

The Software-Defined Data Center is the indisputable future of IT. The question then becomes how to get your company and IT organization there, and where to start. Key consideration factors include choice, flexibility, time to value, ongoing maintenance, ease of use, and budget, among others. With these in mind, and understanding that there is no single “one size fits all” solution, VMware offers several ways to get you to the Software-Defined Data Center—from cloud computing reference architecture working in conjunction with our partners, to our joint partnership in VCE, to new solutions based on our Virtual SAN technology. In this session, Chris Wolf, our Americas Chief Technology Officer, and Mornay Van Der Walt, a Vice President in R&D, will dive deeper into the latter and discuss new solutions based on Virtual SAN that will transform the end-to-end user experience as you know it today—from initial purchase to deployment to ongoing maintenance and support.

OPT2668 – DevOps Demystified! Proven Architectures to Support DevOps Initiatives
Co-presented with Aaron Sweemer
Tuesday, Aug 26, 3:30 PM – 4:30 PM
Wednesday, Aug 27, 3:30 PM – 4:30 PM

DevOps is the use-case architecture most requested by VMware customers. In early 2014, numerous VMware engineers developed and reviewed a field-validated DevOps architecture and best-practice methodology. This session highlights key findings from the VMware field exercise and provides highly detailed architecture diagrams and a step-by-step methodology for supporting DevOps initiatives through the vCloud Suite and open standards such as OpenStack. Attendees will leave the session with detailed integrations for common developer tools and everything needed to fully support DevOps initiatives using VMware technologies.

SDDC3350 – VMware and Docker – Better Together
Co-presented with Ben Golub
Tuesday, Aug 26, 12:30 PM – 1:30 PM

Attend this session to gain deep insights into the VMware and Docker collective strategy. As technology evolves, use cases will abound for VMs, containers, and combinations of each. Key elements of the Docker platform – Docker Engine and Docker Hub – are explored, along with specific vCloud Suite integrations. Attendees will leave this session with knowledge of highly differentiated VMware and Docker integration points that provide leading flexibility, performance, security, scalability, and management capabilities. Ample time for Q&A is provided to have your most pressing questions...

Read More

Debunking Cloud IaaS Mobility Myths

Many things in life appear great on the surface, but wisdom has taught us never to judge a book by its cover or believe in silver bullets. The latest story I frequently hear pitched in IT circles is that of cloud IaaS Utopia. In this universe, workloads can simply move anywhere (between disparate providers and private data centers, for example) without consequence. Typically you’ll hear a few data points to justify the story, including:

- We use OpenStack, therefore your workloads can run anywhere
- We use an open source hypervisor, so you can run your workloads anywhere
- We support Open Virtualization Format (OVF) import and export, so you won’t have any portability concerns
- We have a VM import and export tool, so you can easily add and remove VMs

The last three points are mainly VM-centric, so let’s begin there. When it comes to workload mobility, the VM has always been the easy part. Don’t get me wrong: VM import tools do a great job of dynamically removing the right device drivers and associated software, and properly preparing a VM disk for a new hypervisor; however, that is rarely a challenge today. OVF takes VM import and export beyond a simple vendor or provider tool, and its extensibility allows a VM (or an aggregate of multiple VMs) to specify its management, security, compliance, and licensing requirements to the provider or management platform.

Moving Beyond the Checkboxes

So far, the openness story sounds legitimate, but holes often quickly emerge when you try to operationalize a workload in a new environment. Operational issues such as orchestration, backup, disaster recovery (DR), security, identity, and several requisite management tasks (such as performance, capacity, and configuration management) ultimately impede true workload mobility. Here is a list of considerations:

- Third party integration: It’s critical to understand how a third party solution is supported, not merely the fact that it is supported. Integration can occur at various parts of the stack (such as at the hypervisor management layer APIs instead of at a higher orchestration layer), meaning that moving to a new hypervisor could require product replacement or additional integration work and QA. It’s also important to understand how features are exposed through a platform’s API set (such as open source vs. proprietary APIs). Multiple API integrations may be required to operate the workload in a hybrid cloud.
- Third party software licensing: Can software that manages or runs in the VM have its licenses follow the VM to new infrastructure, or is a new procurement cycle required?
- Vendor ecosystem: Are all of your preferred vendors supported, and do they provide rich integrations for the hybrid cloud environment? How easy is it to find details on third party...
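The OVF point above is worth making concrete: an OVF descriptor is just structured XML that travels with the VM, so a provider or management platform can read the metadata a workload declares about itself before deciding how to run it. The sketch below is a minimal, hypothetical illustration using a heavily simplified descriptor (real OVF 1.x documents carry many more required sections and namespaced attributes); the element names shown are from the DMTF OVF envelope schema, but the descriptor itself is invented for this example.

```python
# Minimal sketch: reading metadata a VM declares in a (simplified,
# hypothetical) OVF descriptor. Real OVF documents are far richer;
# this only shows the parse-and-inspect idea.
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

descriptor = """<?xml version="1.0"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1">
  <VirtualSystem id="web-tier">
    <Info>A single web server VM</Info>
    <ProductSection>
      <Info>Application metadata</Info>
      <Product>Example Web App</Product>
      <Version>1.2</Version>
    </ProductSection>
  </VirtualSystem>
</Envelope>
"""

def summarize(ovf_xml: str) -> dict:
    """Return the VirtualSystem id and its ProductSection metadata."""
    root = ET.fromstring(ovf_xml)
    ns = {"ovf": OVF_NS}
    system = root.find("ovf:VirtualSystem", ns)
    product = system.find("ovf:ProductSection", ns)
    return {
        "system_id": system.get("id"),
        "product": product.findtext("ovf:Product", namespaces=ns),
        "version": product.findtext("ovf:Version", namespaces=ns),
    }

print(summarize(descriptor))
```

The descriptor is the easy, portable part; as the considerations above argue, nothing in it guarantees that the orchestration, backup, or security tooling wrapped around the VM will follow it to the new environment.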

Read More

What IT Operations Can (and Should) Learn from the Electronics Industry

The state of New Jersey is home to some of the most significant electronics inventions in our history, including countless inventions by Thomas Edison and what became the modern transistor. Bell Labs ushered in a sustained period of innovation, and along with it a robust and growing workforce. My own technology career started in the electronics industry, where as a young US Marine I did component-level repair (such as troubleshooting and repairing transistors and integrated circuits on electronic circuit boards). While such specializations were important at the time, today they are mostly irrelevant. In New Jersey I continually run into folks who used to have solid careers as electronics technicians, and most of them are no longer doing any work related to technology. The reason: their skill is no longer important, or the number of skilled professionals far exceeds the available jobs. That happened because lower costs and agility requirements (such as faster repairs and high uptime demands) made their specializations no longer practical. We’re seeing those same requirements drive public, private, and hybrid cloud computing initiatives today.

If you’re wondering what any of this has to do with IT operations, consider the role of the typical IT ops professional. He or she often must deploy, optimize, troubleshoot, and remediate a variety of infrastructure-related services. At the same time, agility demands are mandating substantial investments in automation. As I’ve said before, the key to automating systems economically is to remove as many variables as possible. For the IT ops professional, that will ultimately shrink the demand for high degrees of customization. We’re moving to a world where IT will be consumed as a series of modular components and integrated systems. IT value will increasingly be determined by cost, agility, and availability; successful IT professionals will worry less about operational minutiae and simply leave that to vendors and trusted partners. You could go to Radio Shack, buy a bunch of pieces and parts, and build a radio, but why would you? You can go to a department store and buy one at a lower price that will likely last for years. The notion of building a radio from scratch (outside of a learning exercise) is laughable. Ultimately I foresee the same fate for IT infrastructure systems. Building highly automated infrastructure systems from scratch will be viewed as an academic exercise. Value will shift from building integrated systems to integrating systems with key business processes. In the end, there will be plenty of work for all of us, but we must evolve as IT professionals or risk being left behind. To do that, we need to lose the...

Read More

Complexity is Great for Profits… Just Not Your Profits

Many people who have seen me present over the past year have heard me discuss the notion of complexity and who truly benefits from it. Given the current state of IT budgets and resources, the time is ripe to take a closer look at this issue. Most organizations that I work with are grappling with mandates to be more efficient and responsive to business or department needs (said another way, more agile) and to improve application availability, all while maintaining a flat budget. These mandates often lead to public, private, and hybrid cloud initiatives that include an emphasis on high degrees of automation.

What is the Goal?

A typical first step on the private/hybrid cloud journey is to look at the innovators in the public cloud for inspiration, and then either adopt those solutions or work to build an internal private or hybrid cloud. This is where the problem often starts. Look at the major public cloud IaaS providers and you will notice that they all share the same common architectural tenets:

- A single x86 hypervisor platform and common virtual infrastructure layer
- A single purpose-built cloud management platform
- A stack whose value is mostly derived from software, not hardware

Now consider how those principles compare to some of the private cloud architectural proposals you’ve probably seen. Over the past two years, I have seen dozens of private cloud architectures, and I place most in the Frankencloud category. Frankenclouds are often built from a collection of parts through a somewhat democratic process, and many either fail completely or come at too high a cost. Let me explain. From a political perspective, it’s far easier to let each IT group keep working with its favorite technologies than to standardize on one set of integrated components. Multisourcing is often encouraged as a means to potentially save on acquisition costs. So while a private cloud project may begin with the goal of emulating successful public cloud IaaS implementations, the resulting architecture may look nothing like them. Common attributes include:

- A multi-hypervisor stack with separate management silos
- One or more cloud management platforms that orchestrate across a variety of virtualization, storage, networking, and security components
- A stack that has both hardware- and software-optimized components

If multisourcing throughout the cloud IaaS stack is such a good thing, then why isn’t it pervasive in the public cloud? The reason is simple: it isn’t such a good thing. Yet enterprises are often encouraged to multisource virtualization, storage, networking, and compute infrastructures, among other layers. The reason why: complexity is great for profits! Many traditional vendor and consulting practices have business models that depend on high degrees of complexity and the professional services...

Read More

Why I Decided to Join VMware

When you have the perfect job, it’s not easy to walk away. I had spent the past seven years as an analyst, first at Burton Group and then at Gartner. In my role as an analyst, I was able to work with hundreds of end user organizations and help them with their virtualization, private cloud, and desktop transformation strategies and architectures. I was also able to take feedback from the field and work with a variety of vendors to help them shape their future innovations and product roadmaps. At Gartner, I only had to pick one side (that of the end user), and I relished playing the advocate role. I always thought it would take the absolute perfect opportunity for me to leave Gartner, and I strongly believe VMware has provided it. In my role as CTO, Americas, I will continue to do many of the things I loved at Gartner. I’ll be even more active in social media and community engagement, and I’ll be working closely with VMware customers across the Americas on their current and future cloud, mobile, and virtualization strategies. Unlike my role at Gartner, I’ll now have a direct conduit into VMware’s talented product teams to ensure that community needs are met and often exceeded. Sure, I could have taken on a similar role at other vendors, so why VMware? There are several reasons.

This is Just the Beginning

Yes, VMware pioneered x86 virtualization, and VMware’s success and market dominance in the virtualization space are without question. Some have wondered whether VMware’s best days are in the past, but I don’t think that’s even remotely the case. Turn back the clock 15 years, when VMware was building its flagship platform: many thought it was a gimmick with limited use cases. Most of the industry didn’t foresee that VMware would fundamentally reshape the enterprise data center the way it has. If you look at the work that VMware has done with the software-defined data center (SDDC), it’s easy to see that industry skepticism is back. VMware ESX quickly became a no-brainer business decision because of the server consolidation benefits it provided (not to mention the flexibility afforded by vMotion and simplified DR). With SDDC, we’re beyond servers; we’re now talking data centers. At full maturity, the SDDC will do for data center consolidation what ESX did for server consolidation, and once again the ROI benefits will be obvious. One could argue that the cloud era will also accelerate data center consolidation, and that’s true. However, when you consider the vCloud Hybrid Service (vCHS) and the massive vCloud service provider network, VMware is well-positioned to offer the most...

Read More