Hypervisors

A hypervisor is a piece of software, firmware, or hardware that creates and manages virtual machines (VMs) on a physical server. It acts as an abstraction layer between the hardware and guest operating systems, enabling multiple isolated environments to coexist on a single machine.

The term "hypervisor" derives from "supervisor": where an operating system supervises applications, the hypervisor supervises operating systems. It is also referred to as a VMM (Virtual Machine Monitor).

Role in virtualization

The hypervisor performs several essential functions:

  • Resource partitioning: it divides the physical server's CPU, memory, storage, and network among virtual machines.
  • Isolation: each VM operates in a sandboxed environment. A crash or security vulnerability in one VM does not affect the others.
  • Hardware abstraction: VMs see standardized virtualized hardware, regardless of the underlying physical hardware. This allows migrating a VM from one server to another without reconfiguration.
  • Lifecycle management: creation, startup, shutdown, snapshots, and live migration of virtual machines.

Without a hypervisor, each application would require a dedicated physical server, with an average utilization rate of 10-15%. Virtualization brings utilization up to 60-80%, reducing the number of servers needed and therefore infrastructure costs.
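The consolidation effect can be illustrated with a back-of-the-envelope calculation using the midpoints of the utilization ranges above (illustrative numbers, not a benchmark):

```python
import math

# Midpoints of the utilization ranges quoted above.
dedicated_util = 0.125    # ~10-15% average utilization, one app per server
virtualized_util = 0.70   # ~60-80% utilization with a hypervisor

# One virtualized host absorbs the work of several dedicated servers.
consolidation_ratio = virtualized_util / dedicated_util
print(f"Consolidation ratio: {consolidation_ratio:.1f} to 1")

# A fleet of 100 dedicated servers shrinks accordingly.
hosts_needed = math.ceil(100 / consolidation_ratio)
print(f"100 dedicated servers -> {hosts_needed} virtualized hosts")
```

With these midpoint figures, roughly five to six dedicated servers collapse onto one virtualized host.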

Type 1 vs Type 2: the two categories of hypervisors

Type 1 hypervisor (bare-metal)

A Type 1 hypervisor is installed directly on the server hardware, without an intermediate host operating system. It has direct access to the CPU, memory, and peripherals.

Characteristics:

  • Near-native performance (1-5% overhead depending on workload)
  • Direct access to processor virtualization instructions (Intel VT-x, AMD-V)
  • Suited for production and mission-critical environments
  • Requires dedicated server hardware

Examples:

  • KVM (Kernel-based Virtual Machine): a module integrated into the Linux kernel since 2007. Each VM is a standard Linux process, allowing it to benefit from the kernel's scheduler and memory management.
  • VMware ESXi: a proprietary hypervisor from VMware (Broadcom since 2023). Long considered the enterprise standard, it requires per-CPU licensing.
  • Microsoft Hyper-V: integrated into Windows Server. Also available as a standalone version (Hyper-V Server), though Microsoft discontinued this free edition in 2021.
  • Xen: an open-source hypervisor used notably by AWS for its early EC2 instance generations. Less prevalent today, having largely been replaced by KVM.

Type 2 hypervisor (hosted)

A Type 2 hypervisor is installed as an application on top of an existing operating system (Windows, macOS, Linux). It depends on the host OS to access the hardware.

Characteristics:

  • Easier to install (like any regular software)
  • Higher overhead (the host OS consumes resources)
  • Suited for development, testing, and learning
  • Not appropriate for large-scale production

Examples:

  • VirtualBox (Oracle): free, cross-platform, widely used for local development.
  • VMware Workstation (Windows/Linux) and VMware Fusion (macOS): paid solutions with advanced features (chained snapshots, complex virtual networks).
  • Parallels Desktop (macOS): optimized for running Windows on Mac, including on Apple Silicon chips.
  • QEMU standalone (without KVM): operates in pure emulation mode without hardware acceleration. Useful for emulating different architectures (e.g., running ARM on x86).
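To illustrate that last point, a cross-architecture QEMU invocation looks roughly like this. The snippet only assembles and prints the command; the kernel image path is a hypothetical placeholder:

```python
# Sketch: command line for booting an ARM64 guest on an x86 host with pure
# QEMU emulation (no KVM acceleration). "Image" is a placeholder path.
cmd = [
    "qemu-system-aarch64",
    "-M", "virt",          # generic ARM "virt" machine model
    "-cpu", "cortex-a57",  # fully emulated CPU
    "-m", "1024",          # 1 GiB of guest RAM
    "-nographic",          # serial console on stdio, no display
    "-kernel", "Image",    # guest kernel to boot (placeholder)
]
print(" ".join(cmd))
```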

Type 1 vs Type 2 summary

| Criterion | Type 1 (bare-metal) | Type 2 (hosted) |
|---|---|---|
| Installation | Directly on hardware | On an existing OS |
| Performance | Near-native (< 5% overhead) | Notable overhead (5-15%) |
| Primary use | Production, data centers | Development, testing |
| Examples | KVM, ESXi, Hyper-V, Xen | VirtualBox, VMware Workstation |
| Security | Reduced attack surface | Depends on host OS security |

In production and cloud environments, only Type 1 hypervisors are used. Most major cloud providers rely on KVM (Google Cloud, AWS since 2017 with Nitro, Oracle Cloud) or proprietary variants derived from it; Microsoft Azure is the notable exception, running a customized Hyper-V.

Comparison of Type 1 hypervisors

The table below compares the four most widely deployed Type 1 hypervisors in 2024-2025.

| Criterion | KVM | Proxmox VE | VMware ESXi | Microsoft Hyper-V |
|---|---|---|---|---|
| License | Open source (GPL v2) | Open source (AGPL v3) | Proprietary (Broadcom) | Included with Windows Server |
| Cost | Free | Free (optional paid support) | Per-CPU license, starting at ~$5,000/year per socket | Included with Windows Server license (~$1,000-6,000/year) |
| Technical foundation | Linux kernel module | KVM + QEMU + LXC + web UI | Proprietary VMkernel | Microsoft hypervisor |
| CPU overhead | < 2% | < 2% (KVM underneath) | 2-5% | 5-10% |
| Management interface | CLI (virsh, virt-manager) | Built-in web interface | vSphere Client (licensed separately) | Hyper-V Manager, SCVMM |
| Containers | No (via Podman/Docker in VM) | Yes (native LXC) | No | No (via Windows Containers) |
| Distributed storage | Via Ceph, GlusterFS, etc. | Built-in Ceph | vSAN (additional license) | Storage Spaces Direct (S2D) |
| Live migration | Yes | Yes | Yes (vMotion, additional license) | Yes |
| API | libvirt (REST/SDK) | Full REST API | vSphere API (proprietary SDK) | PowerShell, WMI |
| Community | Very large (Linux kernel) | Active (~100,000 installations) | Declining since Broadcom acquisition | Tied to Microsoft ecosystem |
| Vendor lock-in | None | None (standard data formats) | Strong (VMDK formats, vSphere required) | Moderate (tied to Windows Server) |

Key concerns about VMware ESXi

Since Broadcom's acquisition of VMware in late 2023, the VMware ecosystem has undergone major changes:

  • End of the free tier: the free standalone ESXi version has been unavailable since February 2024.
  • Shift to subscriptions: perpetual licenses were replaced by annual subscriptions, with reported price increases ranging from 200% to 1,200% depending on the configuration.
  • Forced bundling: the product lineup was reduced to two offerings (VCF and vSphere Foundation), requiring some customers to pay for features they do not use.
  • Partner program overhaul: many long-standing partners were dropped from the new program.

These changes have accelerated the migration of many organizations to open-source solutions like Proxmox VE and KVM.

Why Bunker chose KVM and Proxmox VE

Bunker uses Proxmox VE as its management layer, built on KVM as the hypervisor. This choice rests on five pillars.

No vendor lock-in

With KVM and Proxmox, customer data is stored in open, standardized formats (QCOW2, raw). A customer can at any time:

  • Export disk images and reimport them on any other KVM hypervisor
  • Migrate to another cloud provider using KVM
  • Build their own on-premise infrastructure with Proxmox

This reversibility is a core principle of digital sovereignty: customers retain ownership of their data and workloads, with no dependency on a single vendor.

Software sovereignty

KVM is part of the Linux kernel, the most audited and transparent open-source project in the world. The source code is public, verifiable, and free of backdoors. No foreign company can unilaterally change the terms of use, raise prices, or discontinue the product.

Proxmox VE is developed by Proxmox Server Solutions GmbH, an Austrian company (EU). The code is licensed under AGPL v3, ensuring that all modifications remain open source.

Native performance

Because KVM is integrated directly into the Linux kernel, it benefits from the same optimizations as the kernel itself:

  • Native use of hardware virtualization extensions (Intel VT-x/VT-d, AMD-V/Vi)
  • CPU overhead measured below 2% on real-world workloads
  • Direct access to I/O passthrough features (SR-IOV) for network and GPU performance
  • Memory management with KSM (Kernel Same-page Merging) and huge pages

Simplified management with Proxmox

Proxmox VE provides an administration layer on top of KVM that makes day-to-day operations accessible:

  • Full web interface for creating, configuring, and monitoring VMs
  • Integrated Ceph distributed storage management
  • Native LXC container support for lightweight workloads
  • Built-in backup system (Proxmox Backup Server)
  • Clustering and high availability without third-party software

Community and longevity

KVM is maintained by hundreds of contributors, including engineers from Red Hat, Intel, Google, IBM, and AMD. It is used in production by the largest cloud providers in the world. Its longevity does not depend on a single company.

Proxmox has over 100,000 installations worldwide and an active community of contributors and users.

Use cases: which hypervisor to choose?

The right hypervisor depends on the context. Here are recommendations by use case.

Development and testing

Recommendation: VirtualBox or VMware Workstation (Type 2)

For a developer who needs to test their application on different operating systems or reproduce a production environment locally, a Type 2 hypervisor is the simplest option. VirtualBox is free and sufficient for most needs. VMware Workstation offers better performance and advanced features for more demanding scenarios.

Web production and business applications

Recommendation: KVM/Proxmox VE or KVM-based public cloud

For hosting applications in production, a Type 1 hypervisor is essential. KVM with Proxmox offers the best features-to-cost ratio: native performance, high availability, Ceph distributed storage, all without licensing fees. This is the choice made by Bunker.

Enterprise private cloud

Recommendation: Proxmox VE (VMware replacement)

For organizations managing their own data centers, Proxmox VE is now the most mature alternative to VMware vSphere. The web interface, cluster management, built-in Ceph storage, and optional commercial support make it a proven production solution. Migration from VMware is facilitated by VMDK import tools.

Edge computing and remote sites

Recommendation: KVM/Proxmox VE in a small cluster

For edge deployments (factories, retail locations, remote offices), Proxmox can operate in two- or three-node clusters with replicated local storage (a two-node cluster needs an external quorum device, QDevice, to maintain quorum). Its lightweight footprint and absence of licensing make it well-suited for large-scale distributed deployments.

Microsoft-only environments

Recommendation: Hyper-V

If the entire infrastructure relies on Windows Server and Active Directory, Hyper-V is the most coherent choice. It is included with Windows Server licensing and integrates natively with System Center and Azure Arc.

Bunker: hypervisor architecture

On Bunker, the hypervisors are the physical servers that run your virtual machines.

Overview

Bunker uses Proxmox VE as its primary hypervisor. Proxmox is an open-source virtualization solution based on:

  • KVM (Kernel-based Virtual Machine) for hardware virtualization
  • QEMU for hardware emulation
  • LXC for Linux containers

Architecture

Abstraction

The Control Plane uses an abstraction to communicate with hypervisors:

hypervisor_connector (abstract trait)
├── hypervisor_connector_proxmox (Proxmox implementation)
└── hypervisor_connector_resolver (dynamic resolution)

This architecture allows:

  • Adding new hypervisor types without modifying existing code
  • Testing integration with mocks
  • Switching between hypervisors as needed
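The pattern can be sketched in Python (illustrative only: the actual Control Plane components are not Python, and the class and method names here are hypothetical):

```python
from abc import ABC, abstractmethod

class HypervisorConnector(ABC):
    """Abstract interface the Control Plane programs against."""

    @abstractmethod
    def start(self, instance_id: str) -> None: ...

    @abstractmethod
    def stop(self, instance_id: str) -> None: ...

class ProxmoxConnector(HypervisorConnector):
    """Concrete implementation targeting a Proxmox VE cluster."""

    def __init__(self, api_url: str) -> None:
        self.api_url = api_url

    def start(self, instance_id: str) -> None:
        print(f"POST {self.api_url}/qemu/{instance_id}/status/start")

    def stop(self, instance_id: str) -> None:
        print(f"POST {self.api_url}/qemu/{instance_id}/status/stop")

def resolve(hypervisor_type: str, api_url: str) -> HypervisorConnector:
    """Dynamic resolution: pick a connector from a registry of types.
    Adding a new hypervisor type only means registering a new class."""
    registry = {"proxmox": ProxmoxConnector}
    return registry[hypervisor_type](api_url)

connector = resolve("proxmox", "https://pve.example:8006/api2/json/nodes/pve1")
connector.start("100")
```

Because callers only see the abstract interface, a mock connector can be registered in tests, and new hypervisor types slot in without touching existing code.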

Hypervisor management

  • Registration: Add a new Proxmox cluster to the Control Plane
  • Detachment: Remove a hypervisor from the Control Plane
  • Monitoring: Monitor hypervisor health status

Instance operations

The hypervisor executes operations requested by the Control Plane:

| Operation | Description |
|---|---|
| Clone | Create an instance from a template |
| Start | Start an instance |
| Stop | Stop an instance |
| Delete | Delete an instance |
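As a reference point, these operations map onto the public Proxmox VE REST API roughly as follows (the endpoint paths come from the Proxmox VE API; the mapping table itself is an illustrative sketch, with `{node}` and `{vmid}` as placeholders):

```python
# Rough mapping of Control Plane operations to Proxmox VE REST endpoints.
OPERATIONS = {
    "clone":  ("POST",   "/api2/json/nodes/{node}/qemu/{vmid}/clone"),
    "start":  ("POST",   "/api2/json/nodes/{node}/qemu/{vmid}/status/start"),
    "stop":   ("POST",   "/api2/json/nodes/{node}/qemu/{vmid}/status/stop"),
    "delete": ("DELETE", "/api2/json/nodes/{node}/qemu/{vmid}"),
}

method, path = OPERATIONS["start"]
print(method, path.format(node="pve1", vmid=100))
```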

High availability

Proxmox clusters are configured for high availability:

  • Ceph shared storage: VM disks are replicated
  • Live migration: VMs can be moved between nodes without interruption
  • Fencing: Automatic isolation of failing nodes

Frequently asked questions

What is the difference between KVM and Proxmox VE?

KVM is the virtualization component built into the Linux kernel. It provides the ability to create and run virtual machines, but its native interface is command-line based (virsh). Proxmox VE is a complete Linux distribution that integrates KVM, QEMU, LXC, and a web-based management interface. Put simply: KVM is the engine, Proxmox is the dashboard.

Is it possible to migrate VMware VMs to KVM/Proxmox?

Yes. Disk images in VMDK format (VMware) can be converted to QCOW2 format (KVM) using the qemu-img convert tool. Proxmox also includes an import wizard that simplifies this process. VM metadata (CPU, memory, network) must be reconfigured manually, but the data on disk is preserved in full.
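The conversion command can be assembled like this (a sketch: the snippet only builds and prints the command, and the file names are hypothetical placeholders):

```python
# Sketch of the VMDK-to-QCOW2 conversion described above.
# "-f" gives the source format, "-O" the output format, "-p" shows progress.
src, dst = "vm-disk.vmdk", "vm-disk.qcow2"  # placeholder file names
cmd = ["qemu-img", "convert", "-f", "vmdk", "-O", "qcow2", "-p", src, dst]
print(" ".join(cmd))
```

Running the printed command on a host with qemu-img installed produces a QCOW2 copy of the disk while leaving the original VMDK untouched.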

Is KVM suitable for demanding workloads (databases, AI)?

Yes. KVM supports I/O passthrough (SR-IOV) for direct access to network cards and GPUs, which is essential for AI workloads and high-performance computing. Benchmarks show performance indistinguishable from bare-metal for CPU-intensive workloads, and less than 3% overhead for disk I/O with virtio.

Why do most major cloud providers use KVM?

Google Cloud has used KVM from the start. AWS migrated from Xen to KVM (via its Nitro hypervisor) in 2017. Oracle Cloud Infrastructure uses KVM. The reason is technical: KVM is part of the Linux kernel, which means it automatically benefits from every kernel improvement (scheduler, memory management, hardware drivers, security patches). No proprietary hypervisor can match the development pace of the Linux kernel, which receives contributions from thousands of engineers every year.

Next steps