
Who owns your virtualization stack?

In a world full of open-source wrappers and orchestration layers, who actually owns the foundation your infrastructure runs on?

For a long time, there wasn’t much to question. VMware was the virtualization market. You picked it, you paid for it (a lot), and you moved on. Alternatives existed, sure, but they rarely made it past the procurement spreadsheet. “Stack ownership”? Nobody asked.

Then Broadcom happened, and suddenly, everyone was asking.

Partners cut off, pricing multiplied, roadmap rewritten… you know the drill. What used to be a boring, predictable layer of your infrastructure became a very expensive source of risk. That’s when many teams realized just how deep their dependency really was.

At Vates, we didn’t just watch this unfold: we were right in the middle of it! That wave turned us from a niche open source project into a growing company of over 100 people, now powering thousands of deployments across all kinds of industries.

But more interesting than the growth was the pattern we started to see. Everyone was (or is) rebuilding. Everyone was re-evaluating. And everyone was asking the same question, though not always out loud: who actually owns the stack you’re running on?

There are many options out there. But very few of them take real responsibility for the full stack.

💎 Comparing what matters

Let’s be honest: most comparison tables online are just long lists of features, UI screenshots, or subjective takes on what “feels better.” And while interface polish and personal preference absolutely matter, they’re not what defines a platform’s viability in production.

So here, I’ll avoid judging tools based on UI taste, convenience, or whether someone on Reddit said the interface was too old-school. I’m also not trying to match VMware feature-for-feature. Frankly, no one can. And most of the time, the product itself is just one part of the equation.

Instead, I’ve focused on what actually makes a difference when you're building or operating infrastructure at scale:

Who takes responsibility for the stack?
Who do you trust when something breaks?

So here’s a practical comparison based on what I believe matters most: orchestration, hypervisor maintenance, backup, security accountability, whether it’s turnkey, whether it’s open source, and whether 24/7 support is available.

Not exhaustive: just grounded.

🧭 Orchestration-first projects: batteries not included

Tools like CloudStack, OpenStack, and OpenNebula are great at what they’re designed for: giving you cloud-like features (multi-tenancy, APIs, automation, self-service portals). They’re the dashboard, the control panel, the interface layer.

But that’s all they are. And I know that very well: that’s exactly how we started at Vates. Xen Orchestra was originally just the orchestration layer on top of XenServer. It was never meant to replace or maintain the hypervisor underneath. It simply exposed what was already there.
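
To make that concrete, here’s a minimal sketch of what “exposing what’s already there” looks like, assuming an XCP-ng or XenServer host and the XenAPI Python bindings (the host URL and credentials below are placeholders). The hypervisor’s own XAPI already knows about every VM; an orchestration layer mostly surfaces that data in a nicer way.

```python
# Minimal sketch: list VMs straight from a host's XAPI, i.e. the same data an
# orchestration layer like Xen Orchestra surfaces in its UI.
# Assumes the XenAPI Python bindings are installed; the host URL and
# credentials are placeholders for your own environment.
import XenAPI

session = XenAPI.Session("https://xcp-host.example.org")
session.xenapi.login_with_password("root", "secret")
try:
    for ref, vm in session.xenapi.VM.get_all_records().items():
        # XAPI also exposes templates and dom0 as VM records, so skip them
        if vm["is_a_template"] or vm["is_control_domain"]:
            continue
        print(f'{vm["name_label"]}: {vm["power_state"]}')
finally:
    session.xenapi.session.logout()
```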

So yes, these kinds of tools don’t include the core virtualization layer. They don’t handle KVM or Xen patching, security updates, hardware compatibility, or low-level performance. You bring your own hypervisor and you’d better have someone who knows how to maintain it.

In production, that separation really matters.

If no one is accountable for the platform layer, who’s going to fix it when it breaks?

Orchestration-first tools aren’t bad: they can be a great part of a stack. If you’re serious about production, you need to pair them with a solid, maintained hypervisor platform. Otherwise, it’s a bit like bolting a shiny dashboard onto a car with no engine.

🧰 Proxmox VE: integrated surface, outsourced foundation

Proxmox VE is a well-known open-source virtualization platform, especially popular in homelabs, education, and smaller IT environments (and there are very positive reasons for that!). It offers a web interface, clustering, and some orchestration features that make it feel cohesive and approachable from the start. At its core, it’s mostly Linux.

Under the hood, however, Proxmox is tightly coupled to Debian (FYI, I love Debian!). While they do package QEMU (with their own patchsets for performance or features), they do not maintain or control kernel patches, and there’s no dedicated security process for the virtualization layer mentioned in their official documentation. CVE tracking, patch backporting, and formal upstream coordination aren’t part of their public processes.

While a few patches have been submitted upstream (to QEMU or the Linux kernel, at least), there’s no sustained upstream development effort from Proxmox. In practice, the responsibility for low-level stability and security is offloaded to Debian.

Backup is also not included out of the box (though we’ll see that this is the norm for almost everyone): you’ll need to deploy and manage Proxmox Backup Server separately.

And when you look beyond the feature list, a few other things stand out:

  • There’s no centralized console to manage multiple clusters at once (yet; it’s still a work in progress).
  • 24/7 support isn’t available: only business-hours support, and it doesn’t extend to the hypervisor stack.
  • There’s no surrounding ecosystem: no 3rd-party software partner program, no certified hardware integrations, no coordinated R&D with other vendors.

In a production environment, who is accountable for hypervisor security, updates, and patching?
With Proxmox, the answer is: not them.

Is it bad? In my opinion, not really: it just reflects that Proxmox is aimed at a different segment. If you have strong internal skills, limited scale, and want something open and accessible, it can work very well!

But for teams coming from VMware, accustomed to vendor accountability, full-stack support, and centralized infrastructure management, this is a fundamentally different equation.


Corrections based on feedback from Proxmox users (I was sure this would come, and fair is fair!):

  • Proxmox now publishes security advisories, which is a positive step. However, this process isn’t mentioned in their official documentation, and their own process still points to Debian for security on most of the packages. So while things are improving, responsibility for the hypervisor layer mostly remains upstream.
  • Proxmox VE includes basic backup capabilities out of the box (single-cluster, limited options), but for serious use, the official recommendation is to use Proxmox Backup Server (PBS). So while it's technically native, it's not really complete without PBS.
  • Yes, Proxmox developers have a few commits in QEMU and Linux, but these contributions are minimal relative to the scale of those projects. In contrast, companies like Nutanix or Vates have a much deeper involvement in their respective ecosystems, not just through code, but also in upstream release management, community leadership, event organization, and governance roles (like Vates within the Xen Project and Linux Foundation).

🏢 Microsoft: control… toward Azure

Microsoft is one of the few vendors that still fully controls its virtualization stack. With Hyper-V, they maintain the hypervisor, orchestration tools, and provide enterprise-grade support. That level of vertical integration is increasingly rare, and for many organizations, it’s reassuring.

There’s no integrated backup solution by default, but otherwise, Microsoft checks most boxes in terms of product maturity, documentation, and vendor accountability (well, within the usual realm of Microsoft support you get for Windows, but I cannot deny the accountability!).

However, over the past few years, Microsoft has made its direction very clear: the future is Azure. Even its current on-prem virtualization offer is branded Azure Stack HCI, a name that reflects where the long-term focus lies.

So while you still get a coherent stack today, it comes with an increasingly strong pull toward Microsoft's cloud ecosystem.

If you're aiming for long-term flexibility or strategic autonomy, it's worth asking: how neutral is the ground you're building on?

Hyper-V remains a solid technical option, but the strategy behind it is no longer about staying on-prem. That’s not a technical flaw, but it’s a trajectory worth keeping in mind.

Finally, needless to say, you won’t check the "Open Source" box with them.

🧱 Nutanix: HCI focus, full support but still closed & $$$

Nutanix delivers a vertically integrated, enterprise-grade virtualization platform built on KVM, with strong vendor accountability: documented security processes, patch management, and defined support. That sets it apart from DIY or orchestration-only solutions. Backup is still handled via 3rd-party providers, though.

But there are trade-offs:

  • High cost: pricing is at VMware levels or even higher, which may deter cost-conscious buyers.
  • HCI-centric: deeply tied to hyper-converged hardware, with limited flexibility outside approved configurations.
  • Closed ecosystem: no transparency into the source (outside KVM), limiting customization and auditability.
  • Heavy financial structure: Nutanix raised over $300M in VC rounds before its 2016 IPO, and has issued large debt and convertible bonds more recently. That financial leverage creates exposure: what happens if it gets acquired or merged into another Broadcom-like giant?

You get accountability… but you rely on a company whose capital-heavy model could make it a takeover target, potentially shifting your vendor underneath your feet.

🐧 IBM/Red Hat: the KVM paradox

Red Hat was the key driver behind KVM, helping make it a serious contender in the virtualization space. But in recent years, the company (now under IBM) has shifted its focus heavily toward OpenShift and container platforms.

KVM is still supported and maintained, but Red Hat killed its main "turnkey" virtualization platform, RHEV (full support ended on August 31, 2024).

Traditional virtualization now feels more like a legacy feature for IBM/Red Hat than a strategic priority. Classic VM users often find themselves navigating a product roadmap increasingly centered on Kubernetes, operators, and container orchestration.

There’s a growing disconnect between Red Hat’s historical role in virtualization and its current trajectory.

That doesn’t make it a bad option, but if you’re looking to replace VMware with something that feels familiar, VM-first, and easy to manage without a Kubernetes learning curve, Red Hat might not be the most comfortable fit, especially when paired with high subscription costs and increasingly opinionated infrastructure design.

And almost "as usual", backup isn’t baked in.

🚀 Vates VMS: a full-stack open alternative

💡
I’m obviously biased here: I’m Olivier Lambert, CEO and co-founder of Vates, and the original creator of both XCP-ng and Xen Orchestra. But I’ve also lived this journey from the orchestration layer all the way down to hypervisor internals. I’ve done my best to stay as factual and fair as possible in this comparison. If you spot anything inaccurate, don’t hesitate to let me know in the comments; I’ll gladly correct it!

XCP-ng and Xen Orchestra form the core of the Vates VMS platform, designed to be a turnkey virtualization solution: hypervisor, orchestration, backup, security, and support, all maintained by a single vendor under a fully open source model.

We started with orchestration: Xen Orchestra was originally built to manage XenServer remotely. But over the years, we’ve grown into maintaining the entire stack ourselves: from low-level hypervisor code (Xen + XAPI), up to backup and UI layers, and supporting it with a 24/7 team, regular updates, and upstream contributions.

Unlike many alternatives, Vates doesn’t just consume the hypervisor—we actively maintain and shape it. Our team contributes directly to the Xen Project, and we’ve made long-term investments in its evolution:

  • We’ve reported and helped fix multiple security issues: both in Xen itself and in guest-side virtualization drivers.
  • The last two Xen releases were managed by Oleksii, a core Xen maintainer working at Vates.
  • We're leading key efforts in the Xen Project’s roadmap, including the RISC-V port, AMD SEV support, and modern Q35 emulation.
  • We organized the recent Xen Winter Meetup, bringing together core contributors and new developers to collaborate in person.
  • We're also members of the Xen Project Advisory Board.
  • The XCP-ng project is officially hosted within the Xen Project, under the Linux Foundation umbrella.
  • And the current Xen Project community manager is part of our team at Vates.

In short, we don’t just use Xen: we help build its future.

🗒️
Credit where it’s due: Citrix did an excellent job building XenServer back in the day. But after they pushed out their best CEO, Mark Templeton, in 2015, it became clear the project no longer had a place in Citrix’s long-term vision.
Thanks for all the fish though!

Outside the hypervisor, we:

  • Provide an integrated and feature-complete orchestration + backup solution
  • Run a dedicated security process, including CVE management and upstream fixes
  • Deliver a turnkey experience: not a mix-and-match kit
  • Are truly open source, with no paywalled features or open core limitations
  • Offer enterprise-grade support, including 24/7 coverage and professional services (migration consulting, technical account managers…)

That’s why we check all the boxes. But of course, we’re not perfect, and we’re not pretending to be VMware. So obviously, there are trade-offs!

The first is brand familiarity. We’re still often unknown outside of open source circles or sysadmin communities. If your procurement process runs on legacy checklists, you might not see "XCP-ng" listed yet. But that’s changing quickly. We’re now trusted by major organizations like NASA, NOAA, and Harman, alongside many other Fortune 500 companies and more than 1000 customers all around the globe.

Second, while our documentation and resources are growing fast, we’re not at the same level as VMware or Nutanix in terms of white papers, webinars, or glossy enterprise events. That maturity takes time and we’re getting there.

Finally, no, we don’t claim to have 100% of VMware’s feature set. Frankly, no one does. VMware’s catalog is vast, and while we’re moving fast on core features, edge cases might still require a closer look. That’s part of the assessment you’ll need to make: whether our current scope fits your needs, and whether the trade-offs are worth the long-term control and transparency you gain.

And yes, we know we’ve been behind on marketing and sales. That’s changing too. Not by becoming flashy, but by telling our story better, and making it easier for people to discover what we’ve already built.

📋 The comparison table

There’s no shortage of virtualization solutions out there ("finally!" I would say), but not all of them aim to solve the same problems. Some are orchestration layers. Some are tightly integrated, but closed. Others are open source, but incomplete.

If you're rebuilding after VMware, the real question isn’t “what’s open” or “what’s affordable”, it’s:

Who takes responsibility for the entire stack?

Below is a comparison based on what we believe really matters in production: orchestration, hypervisor maintenance, backup, security accountability, whether the solution is turnkey, truly open source, and if 24/7 support is available.

| Solution | VM Mgmt | HV Vendor | Backup | Security | Turnkey | Open Src | 24/7 Help |
|---|---|---|---|---|---|---|---|
| CloudStack | ✅ | ❌ | ⚠️ | ⚠️ | ❌ | ✅ | ⚠️ |
| OpenStack | ✅ | ❌ | ⚠️ | ⚠️ | ❌ | ✅ | ⚠️ |
| OpenNebula | ✅ | ❌ | ⚠️ | ⚠️ | ❌ | ✅ | ⚠️ |
| Proxmox | ✅ | ❌ | ⚠️ | ⚠️ | ✅ | ✅ | ❌ |
| Microsoft | ✅ | ✅ | ⚠️ | ✅ | ✅ | ❌ | ✅ |
| Broadcom | ✅ | ✅ | ⚠️ | ✅ | ✅ | ❌ | ✅ |
| Nutanix | ✅ | ✅ | ⚠️ | ✅ | ✅ | ❌ | ✅ |
| IBM/Red Hat | ⚠️ | ✅ | ⚠️ | ✅ | ⚠️ | ✅ | ✅ |
| Vates VMS | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |

Legend:

  • ⚠️ means, depending on the context, that the capability is only available via 3rd-party providers, isn’t integrated directly into the solution, or isn’t a first-class citizen (like VMs in OpenShift, which are second-class behind containers)
  • ❌ means it’s simply not available, or no one is accountable for it

🌯 Wrapping up

This wasn’t meant to be a complete market analysis or some definitive "who wins" roundup. There are many more projects and products out there, and depending on your use case, some might fit just fine!

The ones I’ve listed here are simply the ones we get compared with most often, especially now. In practice, that means we’re mostly facing Nutanix and Microsoft, sometimes OpenShift. Rarely Proxmox, interestingly: maybe because of our stronger presence in the US, or the nature of the deployments we usually work with.

Either way, the point isn’t to crown a winner. It’s to offer a different lens.

Because in a post-VMware world, the conversation is no longer just "what has a nice UI" or "what’s open source". It’s about who takes responsibility for the hypervisor, for security, for support, for the boring-but-crucial parts that make a platform actually hold up in production.

If this helped reframe how you look at the options, great. That was the goal. And if not, well… at least it wasn’t AI-generated filler, right?