KVM, PCI passthrough and SR-IOV

Good morning to everyone!
Even though this looks like my very first message, I've been using Proxmox since version 1.5 (and I'm still using it), and it is still my preferred virtualization platform.

I'm trying to set up a sort of very private cloud dedicated to HPC. I know that HPC and virtualization together are still very experimental, but I find working with virtual machines much easier than setting up a cluster management system such as Warewulf or Perceus, also because I need to run several distros on this cluster.
The cluster is composed of three computational nodes: the most recently acquired node has 4x AMD Opteron 6272 (64 cores), 256 GB RAM, a Mellanox ConnectX-3 InfiniBand card and some disks; the previous two nodes each have 4x AMD Opteron 6172 (48 cores per node), 256 GB RAM and a Mellanox ConnectX-2 InfiniBand card.
As of now, I'm only doing tests on the 64-core node because the other nodes are in production.

I successfully set up Proxmox 3.2 on the node, got "normal" PCI passthrough working, and tested the InfiniBand connection from the VM (CentOS 6.5 with the latest Mellanox OFED drivers).
Unfortunately, using standard PCI passthrough, only one VM can access the InfiniBand card; however, Mellanox ConnectX-3 cards support SR-IOV and I succeeded in activating it, so now I can see 16 virtual functions:

Code:
04:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]
04:00.1 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:00.2 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:00.3 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:00.4 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:00.5 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:00.6 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:00.7 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.1 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.2 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.3 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.4 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.5 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.6 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.7 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:02.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]

Now I'm really stuck on starting the virtual machine; setting the option

Code:
hostpci0: 04:00.1

in /etc/pve/nodes/proxmox/qemu-server/100.conf

does not work and raises this error:
Code:
kvm: -device pci-assign,host=04:00.1,id=hostpci0,bus=pci.0,addr=0x10: Failed to assign device "hostpci0" : Invalid argument
kvm: -device pci-assign,host=04:00.1,id=hostpci0,bus=pci.0,addr=0x10: Device initialization failed.
kvm: -device pci-assign,host=04:00.1,id=hostpci0,bus=pci.0,addr=0x10: Device 'kvm-pci-assign' could not be initialized
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -name TestHPC001 -smp 'sockets=2,cores=16' -nodefaults -boot 'menu=on' -vga cirrus -cpu kvm64,+lahf_lm,+x2apic,+sep -k it -m 51200 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'pci-assign,host=04:00.1,id=hostpci0,bus=pci.0,addr=0x10' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -drive 'file=/mnt/pve/ISO_su_Microserver/template/iso/CentOS-6.5-x86_64-bin-DVD1.iso,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/mnt/pve/NFS_su_Microserver/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,vhost=on' -device 'virtio-net-pci,mac=C6:04:03:41:7F:A1,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: got timeout

I suppose that pci-assign only works with physical devices, because it works like a charm if I specify 04:00.0 (the whole card) as the PCI address.
I'm looking at vfio-pci, but I'm not able to find much documentation, and almost everything I find is libvirt related.
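From what I can gather from the kernel documentation, binding a VF to vfio-pci by hand should look more or less like this (untested on my side; the vendor:device ID has to be taken from lspci -n):

Code:
# load the VFIO modules
modprobe vfio
modprobe vfio-pci
# get the vendor:device ID of the VF (e.g. 15b3:1004 for a ConnectX-3 VF)
lspci -n -s 04:00.1
# unbind the VF from its current driver, if it is bound to one
echo 0000:04:00.1 > /sys/bus/pci/devices/0000:04:00.1/driver/unbind
# let vfio-pci claim all devices with that vendor:device ID
echo 15b3 1004 > /sys/bus/pci/drivers/vfio-pci/new_id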

Does anybody know how to proceed?

Thanks in advance for any help.

Ciao,
Roberto
 
I've just tried with the updated instructions and with kernel 3.10, but it does not work for me.
If I point to the physical device 04:00.0, the system can only start one single VM accessing the Mellanox card.
If I point to a virtual function (VF), e.g. 04:00.1, the VM cannot start, complaining about a missing iommu_group.
If I try to point to the whole device (04:00) just for testing, I get errors, too.

Maybe I need to specify some more options because SR-IOV is involved and it is not a "simple" PCI passthrough.
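In the meantime I still have to double-check that the IOMMU (AMD-Vi) is actually active on this host; I suppose something like this should tell:

Code:
# look for the IOMMU / AMD-Vi initialization messages from the kernel
dmesg | grep -i -e AMD-Vi -e iommu
# and check which boot parameters are currently in use
cat /proc/cmdline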

Just for completeness I'm posting my pveversion details below:

Code:
root@hpc001:~# pveversion -v
proxmox-ve-2.6.32: 3.2-126 (running kernel: 3.10.0-3-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-3.10.0-3-pve: 3.10.0-11
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-14
qemu-server: 3.1-28
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-21
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-7
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1


Thanks,
Roberto
 
You have old packages - you need to update ALL of them from pvetest to test this, not only the kernel.
 
Hi Tom,
thank you for pointing this out.

I've upgraded everything (I hope; I am more a CentOS user than a Debian one, so I'm not entirely sure I issued the correct commands!).
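For the record, I think what I ran was more or less the following (the pvetest repository line is reconstructed from memory, so please double-check it):

Code:
# /etc/apt/sources.list.d/pvetest.list
deb http://download.proxmox.com/debian wheezy pvetest

# then the usual Debian upgrade
apt-get update
apt-get dist-upgrade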
pveversion output is now:

Code:
root@hpc001:~# pveversion -v
proxmox-ve-2.6.32: 3.2-132 (running kernel: 3.10.0-3-pve)
pve-manager: 3.2-18 (running version: 3.2-18/e157399a)
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-3.10.0-3-pve: 3.10.0-11
pve-kernel-2.6.32-31-pve: 2.6.32-132
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-14
qemu-server: 3.1-28
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-21
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-7
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-1
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

After rebooting the host I tried to start the VM using this configuration:
Code:
bootdisk: virtio0
cores: 16
ide2: none,media=cdrom
memory: 51200
name: TestHPC001
net0: virtio=8A:A1:E4:50:EF:21,bridge=vmbr0
ostype: l26
smbios1: uuid=45c6032e-3283-4483-98d7-c41fc2993a0b
sockets: 2
virtio0: NFS_su_Microserver:100/vm-100-disk-1.qcow2,size=80G
machine: q35
hostpci0: 04:00.1,pcie=1,driver=vfio

where 04:00.1 is the first virtual function of the Mellanox card, but the result is an error:
Code:
root@hpc001:~# qm start 100
Cannot open iommu_group: No such file or directory

If I change hostpci0: 04:00.1 to hostpci0: 04:00.0 (the physical card), only one VM can start; when I try to start another VM using the very same setup (same PCI address), the error is:
Code:
kvm: -device vfio-pci,host=04:00.0,id=hostpci0,bus=pci.0,addr=0x10: vfio: error opening /dev/vfio/10: Device or resource busy
kvm: -device vfio-pci,host=04:00.0,id=hostpci0,bus=pci.0,addr=0x10: vfio: failed to get group 10
kvm: -device vfio-pci,host=04:00.0,id=hostpci0,bus=pci.0,addr=0x10: Device initialization failed.
kvm: -device vfio-pci,host=04:00.0,id=hostpci0,bus=pci.0,addr=0x10: Device 'vfio-pci' could not be initialized

If I revert to what I think should be the correct setup (hostpci0: 04:00.1), the error becomes:
Code:
no pci device info for device '04:00.1'
and the device disappeared from lspci output, too.

Any suggestion?

Thanks,
Roberto
 
To revert to your old config, you should boot your old kernel too... pve-kernel-2.6.32-29-pve (choose it in GRUB, or change the default in /etc/default/grub and apply).
 
To revert to your old config, you should boot your old kernel too... pve-kernel-2.6.32-29-pve (choose it in GRUB, or change the default in /etc/default/grub and apply).

Probably I misused the word "revert": I was only referring to the hostpci0 parameter, not to the whole system.

What I mean is: if I need to connect a VF "exported" by the Mellanox card when SR-IOV is enabled, I suppose that I need to specify the VF address (e.g. 04:00.1 or 04:01.2) and not the address of the PF (04:00.0). In my various attempts I was able to boot the VM only when using the PF address (04:00.0), which, IMHO, is not the right config; in fact, I'm able to start only one single VM, while I would like to start more than one.

Starting a single VM connected to the Mellanox card via PCI passthrough is something I can achieve without "living on the edge" with the pvetest repository and, in the end, it is something I could accept, but I'd like the possibility of starting several VMs on the same node with different configurations, not just one at a time.

I was wondering if this problem could be caused by a different driver: from the Mellanox web site I can download their OFED package, which includes several tools and may be more up to date than the stock OFED available in the mainline kernel tree.
Unfortunately the packages are only available for certain distributions; Proxmox is a mix of a RHEL kernel and Debian userland, so their standard installation scripts do not work.

Thanks,
Roberto
 
What I'm wondering a little bit: do you need to assign the physical interface to the VM, or would it be enough to create a bridge for each ethX interface and assign those bridges to your KVM guests using virtio, e1000 or whatever?
 
What I'm wondering a little bit: do you need to assign the physical interface to the VM, or would it be enough to create a bridge for each ethX interface and assign those bridges to your KVM guests using virtio, e1000 or whatever?

I need to provide InfiniBand connectivity (40Gbps) to virtual machines running on top of Proxmox.
 
I need to provide InfiniBand connectivity (40Gbps) to virtual machines running on top of Proxmox.

Hi, not sure if it's related, but the Mellanox doc says:

"
http://www.mellanox.com/related-doc...uning_Guide_for_Mellanox_Network_Adapters.pdf

To configure the “iommu” to “pass-thru” option :


Intel_iommu=on iommu=pt
"

(as you use AMD CPUs, ignore intel_iommu, but maybe iommu=pt could be interesting).

The main point is that you should end up with a separate iommu group per virtual function, to be able to assign them individually.
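On Proxmox (Debian based), applying it should be something like this (standard GRUB paths assumed):

Code:
# /etc/default/grub : append iommu=pt to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"

# then regenerate the grub configuration and reboot the host
update-grub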
 
If I change hostpci0: 04:00.1 to hostpci0: 04:00.0 (the physical card), only one VM can start; when I try to start another VM using the very same setup (same PCI address), the error is:

What do you mean? You want to pass through and use one device on multiple VMs at the same time? That doesn't work...
 
What do you mean? You want to pass through and use one device on multiple VMs at the same time? That doesn't work...

I know that it's not possible to pass a PCI device to multiple VMs: as far as I understand, this is the reason SR-IOV exists. The same PCI device (usually referred to as the PF, Physical Function) is exposed to the system as a group of VFs (Virtual Functions); each VF has its own address, and I should be able to assign a single VF to a single VM. The Mellanox card exposes 16 VFs, so I should be able to map them to 16 different VMs.
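On the host they all show up as independent PCI functions; for example, listing only the Mellanox devices (15b3 is the Mellanox PCI vendor ID):

Code:
# list every PCI function with Mellanox's vendor ID, with numeric IDs
lspci -nn -d 15b3: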

Did I misunderstand SR-IOV functionality completely?
 
The Mellanox card exposes 16 VFs, so I should be able to map them to 16 different VMs.

Did I misunderstand SR-IOV functionality completely?


Yes, your understanding is correct.

I think the problem could come from the IOMMU module configuration, or maybe from an incompatibility with the motherboard (sometimes a BIOS upgrade can help, too).

What is your server/motherboard model?
 
What is your server/motherboard model?

Here is my dmidecode output:
Code:
Base Board Information
        Manufacturer: Supermicro
        Product Name: H8QG6
        Version: 1234567890
        Serial Number: WM139S600452
        Asset Tag: 1234567890
        Features:
                Board is a hosting board
                Board is replaceable
        Location In Chassis: 1234567890
        Chassis Handle: 0x0003
        Type: Motherboard
        Contained Object Handles: 0

I'm trying to install the Mellanox OFED package on Proxmox, but it's not simple due to the nature of Proxmox itself (Debian-based with a RHEL kernel); the install script gets confused at every step.

My next attempt will be to install CentOS 6.5 as the host OS with oVirt, and try to achieve the same results on a platform that should be fully supported by Mellanox.
 
Here is my dmidecode output:
Code:
Base Board Information
        Manufacturer: Supermicro
        Product Name: H8QG6
        Version: 1234567890
        Serial Number: WM139S600452
        Asset Tag: 1234567890
        Features:
                Board is a hosting board
                Board is replaceable
        Location In Chassis: 1234567890
        Chassis Handle: 0x0003
        Type: Motherboard
        Contained Object Handles: 0
What is the BIOS version?
SR-IOV is supported since BIOS 2.0b.
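(you can read it directly from the running system, for example:)

Code:
dmidecode -s bios-version
dmidecode -s bios-release-date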


>>I'm trying to install the Mellanox OFED package on Proxmox, but it's not simple due to the nature of Proxmox itself (Debian-based with a RHEL kernel); the install script gets confused at every step.
>>
>>My next attempt will be to install CentOS 6.5 as the host OS with oVirt, and try to achieve the same results on a platform that should be fully supported by Mellanox.

It would be great if you could report back whether it works on CentOS!
 
After banging my head against CentOS + oVirt, I have to say that it does not work, and I am not able to understand why.
I can add the PCI device to the VM using virt-manager, but the VM does not boot.
If I try to boot the VM using the web interface, the PCI device is automatically removed and the VM boots, but from my point of view that is almost useless.

qemu fails to boot the VM with the following error:
Code:
Failed to assign device "hostdev0" : Permission denied
qemu-kvm: -device pci-assign,host=04:00.5,id=hostdev0,configfd=28,bus=pci.0,addr=0x8: Device 'pci-assign' could not be initialized

I got the same error using the PF, the physical card.
It is quite strange, considering that SELinux is disabled, the firewall is disabled, and the problem is still there even when running qemu as root.

But this forum is about Proxmox, not oVirt, so let's move on!

I have two options now:
  1. pass the card to a single VM that uses all of the host resources
  2. try to recompile the driver from Mellanox

The first option is not so strange in an HPC environment: I was trying to use virtualization instead of a cluster management suite because I feel more comfortable with VMs than with booting nodes from the LAN, configuring chroot environments, etc.

So it will be my choice for production...
...but the cluster is not going to be in production during August!

Can somebody give me directions to get the kernel sources and, if needed, recompile the kernel?
I downloaded the 3.10 kernel from the git repository, but it is quite far from the usual kernel tarball coming from kernel.org ...

I would like to share my opinion regarding oVirt and Proxmox: setting up oVirt, connecting it to shared storage (a simple NFS export) and connecting to the VM console are a challenge compared to Proxmox. I'm working on a Mac, and it is really annoying that to open the console I have to download a config file, open it with a text reader (less), copy the password and quickly paste it into the VNC client. With Proxmox it is as simple as a click!

Connecting the NFS export is difficult, too: oVirt requires special owner and group settings or it will refuse to connect. Luckily a simple Google query resolves the problem, but I really do not understand why something that is easily done with other products (XenServer and Proxmox, for sure) must be so complicated with oVirt.

Uploading an ISO image is not straightforward either! You have to use a special script or, as I did, discover the full path on the NFS share and copy the file by hand.

Thumbs up for Proxmox!
 
Hi,
the Mellanox doc about enabling SR-IOV is here:
http://community.mellanox.com/docs/DOC-1317

I found a doc about installing OFED on Debian squeeze (maybe it's the same for wheezy):
http://www.cbp.ens-lyon.fr/emmanuel.quemener/dokuwiki/doku.php?id=ofed4squeeze


Can somebody give me directions to get the kernel sources and, if needed, recompile the kernel?
I downloaded the 3.10 kernel from the git repository, but it is quite far from the usual kernel tarball coming from kernel.org ...

The kernel from the Proxmox git is the kernel from Red Hat.
To generate the .deb packages, you just need to run "make" (as root).

If you want to add some custom patches or drivers, you need to edit the "Makefile".
There are some examples in there for the Intel e1000 and Broadcom bnx2 drivers.
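Something like this should be enough to get a build going (the repository name below is from memory, check git.proxmox.com for the exact one):

Code:
# clone the kernel packaging repository (repository name assumed, verify on git.proxmox.com)
git clone git://git.proxmox.com/git/pve-kernel-3.10.0.git
cd pve-kernel-3.10.0
# build the .deb packages
make
# install the resulting kernel package on the host
dpkg -i pve-kernel-*.deb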
 
Hi,
the Mellanox doc about enabling SR-IOV is here:
http://community.mellanox.com/docs/DOC-1317

That's exactly what I have done. It works as expected on CentOS 6.5 and it works on Proxmox, too.
The "trick" to get the VFs exposed is in the options passed when loading mlx4_core module.


I found a doc about installing OFED on Debian squeeze (maybe it's the same for wheezy):
http://www.cbp.ens-lyon.fr/emmanuel.quemener/dokuwiki/doku.php?id=ofed4squeeze
It is written in French, which will be difficult for me, but the commands are in English! I'll try Google Translate, even if the document refers to a different version of OFED.



The kernel from the Proxmox git is the kernel from Red Hat.
To generate the .deb packages, you just need to run "make" (as root).

If you want to add some custom patches or drivers, you need to edit the "Makefile".
There are some examples in there for the Intel e1000 and Broadcom bnx2 drivers.

Thanks for the directions. I will not be in the office for a couple of days, but I will try as soon as possible.
 
After some more testing on the Mellanox card, I decided to try a different card, because I noticed that the integrated Intel network card is SR-IOV capable, too.
I simply removed and reloaded the igb module passing max_vfs=7 as a modprobe option, and the virtual functions appeared.

Of course you have to do this from a console or via IPMI, because the network goes down while the module is reloading.
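For the record, the reload itself was basically this, plus a modprobe.d entry if you want it to survive a reboot:

Code:
# reload the driver with the VFs enabled (run from console/IPMI, the link goes down!)
modprobe -r igb
modprobe igb max_vfs=7

# optional: make it persistent across reboots
echo "options igb max_vfs=7" > /etc/modprobe.d/igb.conf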

I decided to test SR-IOV and KVM with the Intel card and it worked perfectly: I can pass VFs of the same card through to two different VMs running at the same time, simply by following the howto directions step by step.

Here is the output for lspci (trimmed down to the interesting lines only):
Code:
02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
02:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
02:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
02:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
02:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
02:11.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
02:11.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
02:11.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
04:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]
04:00.1 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:00.2 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:00.3 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:00.4 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:00.5 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:00.6 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:00.7 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.1 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.2 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.3 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.4 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.5 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.6 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:01.7 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
04:02.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]

It looks similar to the Mellanox output when SR-IOV is enabled.

What's the difference between Mellanox and Intel?
I do not know the deep differences, but I noticed that, with the Intel card, I can find an iommu_group assignment for every VF, e.g.:

for the physical device:
Code:
root@hpc001:~# readlink /sys/bus/pci/devices/0000\:02\:00.0/iommu_group
../../../../kernel/iommu_groups/11

for one virtual function:
Code:
root@hpc001:~# readlink /sys/bus/pci/devices/0000\:02\:10.0/iommu_group
../../../../kernel/iommu_groups/13

for another VF:
Code:
root@hpc001:~# readlink /sys/bus/pci/devices/0000\:02\:10.4/iommu_group
../../../../kernel/iommu_groups/15

while for the Mellanox card I can find an iommu_group assignment only for the PF:
Code:
root@hpc001:~# readlink /sys/bus/pci/devices/0000\:04\:00.0/iommu_group
../../../../kernel/iommu_groups/10

root@hpc001:~# readlink /sys/bus/pci/devices/0000\:04\:00.1/iommu_group
root@hpc001:~# readlink /sys/bus/pci/devices/0000\:04\:00.2/iommu_group
root@hpc001:~# readlink /sys/bus/pci/devices/0000\:04\:00.3/iommu_group

The last three readlink commands return an empty reply because the link does not exist at all, and this explains the "Cannot open iommu_group: No such file or directory" error I got when trying to start a VM with a VF attached.
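By the way, a quick way to dump every device-to-group assignment at once is:

Code:
# list every device that has been assigned to an iommu group
find /sys/kernel/iommu_groups/ -type l | sort

The Mellanox VFs, having no iommu_group link, simply do not appear in that list.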

In the end: KVM, PCI passthrough and SR-IOV work fine on Proxmox when using the Intel network card (at least the VMs can boot and I can find the card in the VM's lspci output).

For some (still unknown) reason, no iommu_group is populated for the VFs when using the Mellanox card.
I'm going to reboot the server into CentOS (where the Mellanox OFED package is installed) to see if I can reproduce this behaviour on a fully supported platform; if so, I'll post my question on the Mellanox community forum and report the answer back here.

Best,
Roberto
 