
VMworld 2015 – Day 2: The Dawn of People-Centric Computing

The notion of a people-centric, rather than device-centric, application and content delivery model has been around for a very long time. On the surface, people-centric computing is a simple concept—application and content delivery should be a matter of connecting them to people, not to devices. In my past life at Gartner, I devoted several research reports to a topic that always seemed just out of reach. Reality for most enterprises involves a myriad of tools, groups, and processes to deliver Windows, Mac, web, and mobile applications. Many IT leaders I work with would love to have a single team, set of tools, and set of processes for delivering applications and content, regardless of application or device type. Oftentimes there are at least two distinct management silos in an organization: the Windows desktop team and the mobile team. Policy enforcement is often inconsistent across teams due to the disparate tools each uses. For some, web application provisioning and management is still the Wild West, with no central governance or identity management. Read the full post on VMware...

Read More

VMworld 2015: The End of the Beginning — Let’s Go!

VMworld 2015 kicked off today with a massive set of announcements. Each year, our talented group of engineers gets to take part in the ultimate show-and-tell, presenting and getting feedback on a myriad of new products and features. In addition, new ideas are shared and deep discussions occur everywhere you look. VMware has been on a path to deliver the software-defined data center (SDDC) for a number of years, with former CTO Steve Herrod first introducing the concept at Interop 2012. SDDC isn’t just a VMware concept – it’s an industry goal. Software-defined compute, networking, storage, and security are core tenets of many public cloud architectures. The difference with VMware is that our SDDC components are multi-data-center and multi-cloud by design, allowing the same programmatic, API-driven, software-defined infrastructure services to be available across multiple clouds, branch offices, and private data centers. This is all key to the One Cloud philosophy that Carl Eschenbach articulated in this morning’s keynote. Nearly all enterprises will manage relationships with numerous cloud providers; however, they can count on VMware to enable simple deployment to a large choice of service providers or data centers, all while using the same tools and processes. Read the full post on VMware...

Read More

What a Motherboard Can Tell Us About the Future of Data Center Infrastructure

What’s the correct answer to this question? Where did Intel decide to put the memory sockets on its latest motherboard – the 4-socket S4600LH2? If you’re curious, you can see where the memory sockets are located here, but that’s not the correct answer. The right answer is: Who Cares?!!! Long ago, we all decided that Intel was smart enough to figure out where it’s best to place components on its boards. While hardware partners may have some concerns, most end-user organizations couldn’t care less. We care about things like cost and SLAs, but the motherboard hardware architecture is not that interesting. That said, when you buy a server from Intel, are you paralyzed by an overwhelming fear of lock-in? Odds are that the thought never even crosses your mind. Our x86 servers, desktops, laptops, etc. all have industry-standard inputs and outputs like USB, PCIe, and Ethernet. So we can buy a server, integrate the appropriate add-ons using PCIe sockets or over the network, and we’re good to go. Now consider the role of data center infrastructure today. It’s easy to argue that API-driven, programmatic software-defined infrastructure is quickly becoming “table stakes” and a requirement to simply stay in business. Every business in the world will reach a point where agile infrastructure is no longer a differentiator but rather an expectation, whether hosted locally or as a cloud service. So if everyone just needs it, what’s the business value of building custom infrastructure solutions? It’s great for IT services companies, but I’d argue that it’s not great for your organization. Rather than spend time on projects that don’t differentiate your business, shift your teams to work on technology that provides a competitive advantage.
How an application or service connects to infrastructure, and how infrastructure interconnects, will become less and less interesting in the future, especially for the core commodity infrastructure services that are universal among IT organizations and aren’t differentiators (e.g., basic compute, storage, network, and security management). So instead of buying servers, you’ll buy turnkey programmable infrastructure pods running integrated software stacks such as EVO:RAIL or EVO:RACK. Ultimately, the software-defined infrastructure and associated management services will have software updates delivered from the cloud, similar to how you receive consistent updates for iOS and Android mobile devices today. Now let’s go back to the earlier motherboard example. To get comfortable with not caring about how infrastructure is interconnected, our future pods need industry-standard inputs and outputs. Instead of ports like USB, suppose that apps, services, and tools integrated with infrastructure pods using industry-standard APIs such as the OpenStack, Cloud Foundry, or Docker APIs. Now you get the lower total cost...
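As a rough illustration of what “industry-standard inputs” could look like in practice, here is a minimal sketch of building the request body for the standard OpenStack Nova “create server” call. The names and IDs are hypothetical placeholders, not references to a real pod or deployment; the point is only that the same standard payload would work against any pod exposing the API.

```python
import json

def boot_request(name, image_ref, flavor_ref, network_id):
    """Build the JSON body for an OpenStack Nova v2.1 'create server' call.

    Because a pod would expose the standard Nova API, the same body works
    whether the pod sits in a branch office or a provider's data center.
    """
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
            "networks": [{"uuid": network_id}],
        }
    }

# Hypothetical image, flavor, and network identifiers for illustration.
body = boot_request("web-01", "img-1234", "m1.small", "net-5678")
print(json.dumps(body, indent=2))
```

The caller never needs to know which vendor’s hardware, or which internal interconnects, sit behind the endpoint – which is exactly the motherboard argument applied to the whole pod.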

Read More

Cloud-Scale Management “Likes” Social Media

Managing IT resources across complex enterprises has historically been challenging, and those challenges are only growing in complexity. We’re adding more variables in terms of clouds, automated processes, and people, to name a few. Consider the steady growth of the Internet of Things (IoT): for many organizations, management complexity could expand by several orders of magnitude. If that’s not bad enough, there’s one variable that IT has never been able to control – people. People come and go and often play by their own rules. When it comes to managing an enterprise, we can no longer assume that people will conform to defined enterprise management standards. Instead, IT operations must conform its standards to the customers it serves. That is why, going forward, social media can be an effective tool to bridge the gap between traditional management tools and processes and more collaborative work styles. Some of you may be envisioning the scenario below, but there are serious and significant use cases for deep social integration into enterprise management. Consider a typical problem that I hear frequently from our clients: if scheduled maintenance will impact specific application instances (VMs, containers, etc.), how does IT operations notify prospective application owners – or simply the members of the organization who care about a particular application or service? That problem may sound easy on the surface, but for many organizations it has long been a struggle. The original application owner may have left the company, and it may not be clear who cares about a particular application or service. Experience has already shown that mass emails are rarely effective. This is where social media can bridge the gap. Consider the following workflow: as employees and contractors come and go, the social platform can be quickly updated to reflect ownership and interest changes.
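The follow-and-notify workflow above can be sketched in a few lines. The object IDs and names below are invented for illustration, and a real implementation would use the Socialcast (or a similar platform’s) API rather than an in-memory dictionary:

```python
# Minimal sketch of ownership/interest tracking for maintenance notices.
# All object IDs and people are hypothetical.
followers = {}  # maps an infrastructure object (e.g., a VM) to interested people

def follow(obj_id, person):
    followers.setdefault(obj_id, set()).add(person)

def unfollow(obj_id, person):
    followers.get(obj_id, set()).discard(person)

def notify_maintenance(impacted_objects, message):
    """Return a {person: message} map covering everyone who follows
    any of the impacted objects -- no mass email required."""
    audience = set()
    for obj in impacted_objects:
        audience |= followers.get(obj, set())
    return {person: message for person in audience}

follow("vm-web-01", "alice")
follow("vm-web-01", "bob")
follow("vm-db-02", "carol")
unfollow("vm-web-01", "bob")  # Bob left the company

notices = notify_maintenance(["vm-web-01"], "Host patching Saturday 02:00 UTC")
# only Alice is notified; Bob and Carol are spared the noise
```

The key property is that the audience is computed from current follower state at notification time, so ownership churn never leaves the list stale.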
Social streams can also be monitored to spot potential bugs, performance issues, or pending performance spikes. For example, I recently met with a client that was building a solution to monitor social streams and news feeds to predict the load on its trading systems and proactively expand capacity before the inevitable performance spike arrives. There are many advantages to enhancing cloud management with a social fabric, including: Cutting down noise: key stakeholders of any application or service can follow the objects relevant to that service (such as VMs, physical hosts, networks, and containers) instead of getting inundated with notification emails that may not apply to them. Aggregating notifications: associated group notifications can be aggregated into a single Socialcast post. That single post can include information pulled or pushed from a variety of management tools such as vCenter Server, vRealize Operations and...
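A very naive version of the spike-prediction idea can be sketched as a trailing-average threshold on mention counts. The window size and trigger factor here are illustrative assumptions, not values from the client’s actual system:

```python
from collections import deque

class SpikePredictor:
    """Sketch: if mentions in the latest interval exceed a multiple of the
    trailing average, signal a proactive scale-out. Window and factor are
    hypothetical tuning values for illustration only."""

    def __init__(self, window=5, factor=3.0):
        self.history = deque(maxlen=window)  # recent per-interval mention counts
        self.factor = factor

    def observe(self, mentions_this_interval):
        # Compare against the average of prior intervals, then record the new one.
        baseline = sum(self.history) / len(self.history) if self.history else 0
        self.history.append(mentions_this_interval)
        return baseline > 0 and mentions_this_interval > self.factor * baseline

p = SpikePredictor()
signals = [p.observe(n) for n in [10, 12, 11, 9, 50]]
# the jump to 50 mentions against a ~10.5 average trips the final signal
```

In a real deployment the `True` signal would feed an orchestration workflow (e.g., adding trading-system instances) rather than just a boolean, and the input would come from a stream-monitoring feed instead of a hard-coded list.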

Read More

VMworld Session Replay: Software-Defined Data Center through Hyper-Converged Infrastructure

Last week I discussed the benefits of hyper-converged infrastructure in this post on the VMware CTO blog. In short, we spend far too much time building and maintaining commodity IT services. To many IT decision makers, commodity means non-differentiating: the services that every organization in the world must deploy and maintain. That includes tasks like server, storage, network, and security provisioning and maintenance. The thought is simple: if you can get commodity services delivered in a software/hardware stack that’s maintained by a vendor, opex costs can be dramatically reduced and IT operations teams are freed to focus on what really matters. That affords more time for improving the agility and efficiency of critical business applications. The security administrator who previously spent too much time manually writing firewall rules now has more time to research emerging threats and associated response methodologies. When you let go of building things that are just basic building blocks for any application, there is so much more time for true innovation. If you could get your core data center infrastructure with an iOS-like experience – simple deployment and updates, a choice of carriers (or vendors), and an ecosystem of “apps” providing simple turnkey integrations – why wouldn’t you? Consider all of the time you could spend driving business innovation instead of maintaining the same non-differentiating infrastructure as all of your competitors. Take a look at the VMworld session below that I co-presented with Mornay Van Der Walt for a full look at our strategy and current portfolio in the hyper-converged...

Read More

My VMworld 2014 Sessions

I’m pretty busy at this year’s VMworld North America conference. If you would like to drop by one of my sessions, here they are:

SDDC3245-S – Software-Defined Data Center through Hyper-Converged Infrastructure
Co-presented with Mornay Van Der Walt
Monday, Aug 25, 2:00 PM – 3:00 PM

The Software-Defined Data Center is the indisputable future of IT. The question then becomes how to get your company and IT organization there, and where to start. Key consideration factors include choice, flexibility, time to value, ongoing maintenance, ease of use, and budget, among others. With these in mind, and understanding that there is no single “one size fits all” solution, VMware offers several ways to get you to the Software-Defined Data Center—from cloud computing reference architectures built in conjunction with our partners, to our joint partnership in VCE, to new solutions based on our Virtual SAN technology. In this session, Chris Wolf, our Americas Chief Technology Officer, and Mornay Van Der Walt, a Vice President in R&D, will dive deeper into the latter and discuss new solutions based on Virtual SAN that will transform the end-to-end user experience as you know it today—from initial purchase to deployment to ongoing maintenance and support.

OPT2668 – DevOps Demystified! Proven Architectures to Support DevOps Initiatives
Co-presented with Aaron Sweemer
Tuesday, Aug 26, 3:30 PM – 4:30 PM
Wednesday, Aug 27, 3:30 PM – 4:30 PM

DevOps is the use-case architecture most demanded by VMware customers. In early 2014, numerous VMware engineers built and reviewed a field-validated DevOps architecture and best-practice methodology. This session highlights key findings from that field exercise and provides highly detailed architecture diagrams and a step-by-step methodology for supporting DevOps initiatives through the vCloud Suite and open standards such as OpenStack. Attendees will leave the session with detailed integrations for common developer tools and everything needed to fully support DevOps initiatives using VMware technologies.

SDDC3350 – VMware and Docker – Better Together
Co-presented with Ben Golub
Tuesday, Aug 26, 12:30 PM – 1:30 PM

Attend this session to gain deep insights into the collective VMware and Docker strategy. As technology evolves, use cases will abound for VMs, containers, and combinations of the two. Key elements of the Docker platform – Docker Engine and Docker Hub – are explored, along with specific vCloud Suite integrations. Attendees will leave this session with knowledge of highly differentiated VMware and Docker integration points that provide leading flexibility, performance, security, scalability, and management capabilities. Ample time for Q&A is provided to have your most pressing questions...

Read More

Debunking Cloud IaaS Mobility Myths

Many things in life look great on the surface, but wisdom has taught us never to judge a book by its cover or believe in silver bullets. The latest story I frequently hear pitched in IT circles is that of cloud IaaS Utopia. In this universe, workloads can simply move anywhere (between disparate providers and private data centers, for example) without consequence. Typically you’ll hear a few data points to justify the story, including:

- We use OpenStack, therefore your workloads can run anywhere
- We use an open source hypervisor, so you can run your workloads anywhere
- We support Open Virtualization Format (OVF) import and export, so you won’t have any portability concerns
- We have a VM import and export tool, so you can easily add and remove VMs

The last three points are mainly VM-centric, so let’s begin there. When it comes to workload mobility, the VM has always been the easy part. Don’t get me wrong – VM import tools do a great job of dynamically removing the right device drivers and associated software and properly preparing a VM disk for a new hypervisor; however, that is rarely a challenge today. OVF takes VM import and export beyond a simple vendor or provider tool, and OVF’s extensibility allows a VM (or an aggregate of multiple VMs) to specify its management, security, compliance, and licensing requirements to the provider or management platform.

Moving Beyond the Checkboxes

So far, the openness story sounds legit, but holes quickly emerge when you try to operationalize any workload in a new environment. Operational issues such as orchestration, backup, disaster recovery (DR), security, identity, and several requisite management tasks (such as performance, capacity, and configuration management) ultimately impede true workload mobility. Here is a list of considerations:

Third-party integration: It’s critical to understand how a third-party solution is supported, rather than just the fact that it is supported. Integration can occur at various parts of the stack (such as at the hypervisor management layer APIs instead of at a higher orchestration layer), meaning that moving to a new hypervisor could require product replacement or additional integration work and QA. It’s also important to understand how features are exposed through a platform’s API set (such as open source vs. proprietary APIs). Multiple API integrations may be required to operate the workload in a hybrid cloud.

Third-party software licensing: Can software that manages or runs in the VM have its licenses follow the VM to new infrastructure, or is a new procurement cycle required?

Vendor ecosystem: Are all of your preferred vendors supported, and do they provide rich integrations for the hybrid cloud environment? How easy is it to find details on third party...
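To make the earlier OVF extensibility point concrete, here is a sketch of reading deployment properties out of an OVF descriptor with Python’s standard library. The envelope below is a heavily trimmed, hypothetical fragment (real OVF packages carry much more structure), and the property keys are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily trimmed OVF fragment carrying custom properties
# that describe compliance and backup requirements for a VM.
OVF = """<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <VirtualSystem ovf:id="web-tier">
    <ProductSection>
      <Property ovf:key="compliance.zone" ovf:value="pci-dss"/>
      <Property ovf:key="backup.policy" ovf:value="daily"/>
    </ProductSection>
  </VirtualSystem>
</Envelope>"""

NS = "{http://schemas.dmtf.org/ovf/envelope/1}"

def read_properties(ovf_xml):
    """Collect ovf:key/ovf:value pairs from every ProductSection Property."""
    root = ET.fromstring(ovf_xml)
    return {
        prop.get(NS + "key"): prop.get(NS + "value")
        for prop in root.iter(NS + "Property")
    }

props = read_properties(OVF)
```

Carrying requirements in the descriptor is the easy half; the receiving platform still has to understand and enforce keys like these, which is exactly where the operational gaps described above appear.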

Read More