VMware has come a very long way since it first wowed developers and IT teams with its defining product, vSphere. As the pioneer of server virtualisation and consolidation, the company built its reputation on making life easier for developers while saving large companies some serious cash. While vSphere, and virtual machines in general, continue to serve as core parts of many enterprise development environments around the world, the company has evolved with the industry, catering for and inspiring the trends of the digitally transformed business, including the switch to public, hybrid, and multi-cloud.
A big part of this shift, in recent times, has been a fundamental change in how developers write and orchestrate applications. While VMware has provided the platform of choice for creating and managing VMs, containers have increasingly taken the world of enterprise IT by storm. One of the technologies leading this charge is, of course, Kubernetes, the open-source container orchestration platform. Private and public organisations alike are falling in love with the idea of managing their applications as containers orchestrated by Kubernetes, as the approach provides a raft of advantages, including a more lightweight footprint, faster deployment times, and easier scalability.
VMware has been supporting Kubernetes for some time with the availability of the Pivotal Container Service (PKS) on VMware Cloud, and has been signalling an increased focus on the open-source platform through its acquisitions of Heptio and, indeed, Pivotal. Things really kicked up a notch, though, at its VMworld event in San Francisco, where the company announced 'Project Pacific', which essentially fuses vSphere with Kubernetes. This has been positioned as the biggest update to vSphere in a very long time, and the company is backing it to have something of a transformative effect on the industry, and indeed on VMware itself.
At VMware's European event in Barcelona (VMworld Europe), we sat down with VMware solutions marketing director Rory Choudhuri to talk about why there is so much demand for Project Pacific and how the offering reframes the containers vs VMs debate. Choudhuri also discusses the massively difficult question enterprises face today around when to update legacy applications and infrastructure and go all-in on modernisation. Finally, we talk about where the industry is moving next, including both serverless architectures and distributed/edge computing.
VMware has elaborated a little on its 'Project Pacific' offering, which will allow VMs to run alongside containers in vSphere. There seems to be a fair amount of hype and customer interest in this project, to the point where you've found it tough to keep up with demand. Why do you think this is the case?
I think the reason there is so much customer interest is that it's answering a genuine need. The fact is, more and more organisations are writing applications for their own internal use, and the way those applications are written is changing. Fundamentally, that means developers need a new environment on which to write. If they don't get it from IT, they'll go external, and a shadow IT situation will arise, as developers seek answers to issues by themselves.
However, shadow IT never happens if a developer has an adequate environment in which to work, and that's what we're looking to accomplish with Pacific. I would actually argue that the interest in the project is not hype, because we're not actually trying to hype it. We're just going in this direction because we've seen a market shift and we have the expertise - as well as a privileged position within the infrastructure of an organisation - to be able to provide the platform on which firms can respond to that shift.
Essentially, the reason customers are so desperate to get onto the beta is that their IT teams are hearing from developers that this is what they want to do.
I suppose there has been a school of thought in the industry that containers are an evolution beyond VMs, so how do you see the two working alongside each other in Project Pacific?
I think containers and virtual machines do different things. If you go back three or four years, people were having a discussion around 'containers versus VMs', although I think we've all but put that discussion to bed now, because it's very much 'containers and VMs'. And the best place to run a container environment—by far—is in a virtualised environment.
If you think about a 12-factor application and the pieces it needs, developers don't want to be writing those every single time. VMs are brilliant at things like high availability, redundancy, and failover, for example, so running your containerised environment, orchestrated with Kubernetes, in a virtualised environment is going to give you the best of both worlds. Essentially, every customer I have talked to has gone down that path.
The beauty of Pacific is that it gives you all of that functionality, without necessarily needing to put it in a fully virtualised environment because you can run the containers as first-class citizens on the hypervisor. So, you're essentially using the hypervisor to provide that functionality.
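To make that 'best of both worlds' idea concrete, the sketch below shows what the developer's side of the bargain looks like: a three-replica deployment declared through the official Kubernetes Python client, with the platform underneath (a virtualised environment or, in Pacific's case, the hypervisor itself) left to keep those replicas alive. The cluster, names, and container image here are illustrative assumptions rather than anything specific to Project Pacific.

```python
# Minimal sketch: declaring redundancy once, rather than writing it per application.
# Assumes a reachable cluster and a local kubeconfig; names and image are illustrative.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three copies running; failover is handled for you
        selector=client.V1LabelSelector(match_labels={"app": "demo-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The point is simply that the developer declares the desired state, and the infrastructure, wherever it runs, supplies the availability underneath.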
So, thinking proportionately, do you see a relatively equal number of workloads running in containers vs virtual machines?
Well obviously we won't see an equal number of containers and VMs because by definition containers are smaller and more lightweight, so you'll generally see something like tens of thousands of containers vs thousands of VMs. In terms of what the ratio is going to be, it's very difficult to tell, but I think it will vary depending on the customer, the application, the user profiles, and the use case.
If I was going to make a prediction, I would say that, over time, we will see far more containers than VMs, but they will be doing different things.
Looking even further forward, some have positioned functions-as-a-service running on serverless architectures as the next step beyond containers. How disruptive do you think FaaS and serverless will be going forward?
There have been estimates that the number of applications we're going to see in the next five years is greater than the number we've seen in the last 40, or something along those lines. The proliferation of applications is something I am seeing with every customer I talk to, as well as within VMware ourselves.
In that sense, we also get questions around what percentage of applications organisations should run in the cloud, and whether the high volume of applications means that data centres are dead. The truth is, when you're running 10x the number of applications - even if a large proportion of them live in the cloud - you're going to need a heck of a lot more resources in the data centre. Taking that into account, my fundamental belief is that both will co-exist.
What I am seeing is a level of intelligence from organisations in terms of how they write those applications. It's a given that applications are going to be multifactorial: they're smaller, more component-based, and they interrelate an awful lot more. What is heartening is that developers and organisations are realising that they don't need to write the whole thing; they can use components from an internet content library, or open-source components, provided they are confident in their security posture.
This results in new services being brought to market where developers have only had to write 15% of the code. They've grabbed some pieces from GitHub, other pieces from Bitnami, and pulled together a working application. Their intelligence is in writing the 15% and interrelating the different components to create a service that adds value to the customer.
I think we're going to see more of that, and that's where we're going to see functions playing a part. There is a question over whether SMEs - for example - can actually do intelligent things with AI-based functionality, as they probably don't have the in-house expertise to do so. However, if you can consume it as a service from the cloud, either through SaaS or as a function that plugs into your application or your devices at the edge of the network, I think we're going to see that happen. So, I think FaaS will be less disruptive and more additive.
Shifting tack, one thing that organisations tend to struggle with is the question of when to modernise. Many firms are conscious that their legacy applications might still be doing a good job, so there are often difficult decisions around whether they're worth replacing or modernising. What advice do you generally give to customers where this is concerned?
This is a really good question, because it's something that comes up a lot. One of our approaches has been making professional services available to assist customers and provide an evaluation.
We had a story recently that just boggled my mind, where the customer came in and said, 'we're going to the cloud and we'll be in the cloud by 2021'. That's a great statement on paper, but obviously you need a logistical plan to back it up. So, we asked them how they were planning on moving the 4500 applications they had over to the cloud, and they told us they were paying a third-party consultancy to move the applications for them. However, that process takes about six weeks per application... so you can do the maths on that.
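For readers who want to do that maths, a rough back-of-the-envelope sketch follows; the 4,500 applications and six weeks per application come from the anecdote above, while the number of parallel migration streams is purely an illustrative assumption.

```python
# Back-of-the-envelope figures based on the numbers quoted above.
apps = 4500
weeks_per_app = 6

total_weeks = apps * weeks_per_app       # 27,000 application-weeks of effort
sequential_years = total_weeks / 52      # roughly 519 years if migrated one at a time

# Even with many applications migrated in parallel, the calendar time is still years.
parallel_streams = 100                   # illustrative assumption, not from the interview
calendar_years = total_weeks / parallel_streams / 52   # roughly 5.2 years

print(f"{total_weeks} weeks of effort, ~{sequential_years:.0f} years sequentially, "
      f"~{calendar_years:.1f} years with {parallel_streams} parallel streams")
```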
Ultimately, they weren't going to be in the cloud by 2021 unless they did something radically different. The advice we gave to that customer is that they needed to understand their estate, and that applies to most customers. There is always going to be a certain, often small, percentage - say 10% - of applications that will suit a 'lift and shift' methodology. There's probably going to be another 10% where you can lift them, twiddle with them a bit, drop them, and they're going to work. Then you get to the fun bit.
You mentioned the solid application that sits in the corner and does what it does; it's going to carry on doing that forever. For example, there are financial institutions still running core processes on mainframes. Why is that the case? For one, they are tremendously risk-averse, but the second part of it is that it just works. It's the same for large global airlines running parts of their websites on NT servers, which are obviously long since out of support, but it works.
IT has always had the new, funky, shiny things that the industry is doing, and that is becoming an ever more important piece. But the rest of it - the long tail - will remain significant, because companies need to be able to keep the lights on. Looking at analyst data - and it does depend on who you talk to - generally 75-80% of IT spend is focused on keeping everything running, while 20-25% is spent on innovation.
So, taking all of that into consideration, our advice to customers is: be intelligent about what can move, how difficult it is to move, what benefits you're looking to achieve, and why you're moving things. Perhaps two to three years ago the conversations centred around cloud-first in every sense. Now, I'm seeing much more intelligence being applied to the decision-making, which is heartening.
To simplify, at the end of the day it's about understanding that cloud is not a destination, cloud is a strategy.
In the initial stages of the cloud boom, the major trend involved organisations moving their legacy, on-premise workloads into large, centralised public cloud environments. Now, though, we're starting to see a shift away from that purely centralised notion, first towards hybrid, but also towards distributed/edge computing, driven by things like IoT adoption and content delivery networks. How disruptive will edge computing be to current public cloud and hybrid models?
It depends on what you mean by disruptive, although it will almost certainly bring changes. Prior to VMware I worked in the telco space for a while, so I know both the power of what telcos can do and how unbelievably conservative they are. In that sense, I do think they are a sleeping giant.
However, there are certain physical constraints that even 5G might struggle to overcome at this point in time, as 5G needs a heck of a lot more masts. At a simple physical level, the real estate required to put up however many multiples of the number of masts that even 4G requires is a buildings, acquisitions, and land challenge that will be easier to meet in some countries than in others.
That aside, there are so many benefits that 5G will bring. We can predict the easy stuff, like the explosion of sensors and the replacement of home broadband, but there are multiple use cases that we still don't know about.