Kernel BUG: CPU soft lockup (VM/host freezes)

tribumx

New Member
May 11, 2022
Hey,

I'm running Proxmox with these versions:
Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.35-1-pve)
pve-manager: 7.2-4 (running version: 7.2-4/ca9d43cc)
pve-kernel-5.15: 7.2-3
pve-kernel-helper: 7.2-3
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.1-8
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-1
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-4
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.1-1
proxmox-backup-file-restore: 2.2.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-7
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1

on this machine:
  • AMD Ryzen™ 7 3700X
  • 64 GB DDR4 ECC
  • 2 x 12 TB HDD
I'm having issues with the kernel: after a while the host becomes unreachable over SSH, and I have to hard-reset it every time this happens.

This is my syslog:
https://pastebin.com/5EcrKb5z

I initially thought it might be because I had overcommitted the CPU resources, so I reduced the core count of all VMs so the total no longer exceeds the host's core count. But after 30-40 minutes the same error appeared again. I've also read on bugzilla.kernel.org that it might be the AMD processor. The odd thing is that the error only started appearing after I installed a Windows VM, so it must be the Windows VM... but I don't know what to change.
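Roughly what I did to bring the vCPU count back under the host's limit (VMID 100 is only a placeholder, adjust to your own VMs):
Code:
# rough sum of the cores assigned to all VMs on this node
# (ignores sockets, the default of 1 core, and snapshot sections)
grep -h '^cores:' /etc/pve/qemu-server/*.conf | awk '{sum+=$2} END {print sum, "vCPUs assigned"}'

# threads available on the host (16 on the Ryzen 7 3700X)
nproc

# lower a VM's core count so the total no longer exceeds the host
qm set 100 --cores 4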

Anyone got a clue?

Thanks!
 
So the system was fine until the Windows VM was started? Do you pass the CPU through as host?
Yes, I never use kvm64, always CPU passthrough. I also tried enabling hv-tlbflush in the CPU settings.
I still get the same issue after a while. Maybe it's the kernel, because since I stopped overcommitting, the Windows VM keeps working for 1-2 hours. Before that, everything broke down after 10-20 minutes.
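For reference, this is roughly how I set it (VMID 100 is only a placeholder):
Code:
# CPU type "host" plus the Hyper-V TLB-flush enlightenment for the Windows guest
qm set 100 --cpu host,flags=+hv-tlbflush

# resulting line in /etc/pve/qemu-server/100.conf
cpu: host,flags=+hv-tlbflush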
 
Yes, I never use kvm64, always CPU passthrough. I also tried enabling hv-tlbflush in the CPU settings.
You mean CPU type host, don't you? The word "passthrough" doesn't make sense here: you still have the abstraction layer and only pass all the CPU flags directly through to your guest. Have you tried not doing this?
 
You mean CPU type host, don't you? The word "passthrough" doesn't make sense here: you still have the abstraction layer and only pass all the CPU flags directly through to your guest. Have you tried not doing this?
I've changed it to kvm64 now. No bug in the last three hours. Will keep you updated, thanks!

20 minutes after this post, the next "CPU stuck" bug occurred.
 
I even reduced it to 2 CPUs, so Proxmox has 4 CPUs left to handle everything else, and I still get the bug...
 
Hello,

we're experiencing similar problems; it seems that live migration triggers some weird clock issues. For example, I migrated a CentOS 7 VM from an EPYC 1st-gen node to an EPYC 3rd-gen node and the VM kept running, but when I migrated it back to the EPYC 1st-gen node it crashed with 100% CPU and I could only reset it. I tried this a few times back and forth, and then suddenly other VMs started getting these issues too. In one case, on a Windows server, the time jumped from 1.6.2022 17:00 to 2.6.2022 20:37; the Windows VM recovered, but a few Linux VMs crashed similarly to my test VM. It also seems the issues only happen on AMD EPYC CPUs; we have 4 older nodes with Xeon E5 v4 and v3, and AFAIK there has been no crash on that hardware since the upgrade to 7.2.
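For reference, the kind of migration that triggers it here (VMID and node names are only examples):
Code:
# live-migrate the running VM from the EPYC 1st-gen node to the EPYC 3rd-gen node
qm migrate 100 epyc3-node --online

# migrating it back is when the guest locks up at 100% CPU
qm migrate 100 epyc1-node --online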

Anyone else seeing something like this?

Regards

proxmox-ve: 7.2-1 (running kernel: 5.15.35-1-pve)
pve-manager: 7.2-4 (running version: 7.2-4/ca9d43cc)
pve-kernel-5.15: 7.2-3
pve-kernel-helper: 7.2-3
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.1-8
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-4
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
openvswitch-switch: 2.15.0+ds1-2+deb11u1
proxmox-backup-client: 2.2.1-1
proxmox-backup-file-restore: 2.2.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1
 
Hello,

we're experiencing similar problems; it seems that live migration triggers some weird clock issues. For example, I migrated a CentOS 7 VM from an EPYC 1st-gen node to an EPYC 3rd-gen node and the VM kept running, but when I migrated it back to the EPYC 1st-gen node it crashed with 100% CPU and I could only reset it. I tried this a few times back and forth, and then suddenly other VMs started getting these issues too. In one case, on a Windows server, the time jumped from 1.6.2022 17:00 to 2.6.2022 20:37; the Windows VM recovered, but a few Linux VMs crashed similarly to my test VM. It also seems the issues only happen on AMD EPYC CPUs; we have 4 older nodes with Xeon E5 v4 and v3, and AFAIK there has been no crash on that hardware since the upgrade to 7.2.

Anyone else seeing something like this?

Regards

I think it's neither the CPU nor the RAM. The CPU bug always occurs when there is high I/O, so it's the same as in your case, because a migration means high I/O too.
But how do we fix that? Use separate HDDs? A different file system?
 
I'm not so sure it really is I/O; the storage for these VMs comes from an external Ceph cluster. So yes, there is network I/O, but there always is because of the Ceph traffic, and at night there is always a backup running and nothing happens then.
 
I'm not so sure it really is I/O; the storage for these VMs comes from an external Ceph cluster. So yes, there is network I/O, but there always is because of the Ceph traffic, and at night there is always a backup running and nothing happens then.

Hmm, I've read through my syslog and I think in my case it's the filesystem. Looking at the call trace, it's definitely something filesystem-related. So do I need to change my filesystem, or what? I'll try it out.
Code:
Jun 01 07:41:27 lucy kernel: watchdog: BUG: soft lockup - CPU#5 stuck for 2094s! [kvm:2438]
Jun 01 07:41:27 lucy kernel: Modules linked in: tcp_diag inet_diag veth ebtable_filter ebtables ip_set ip6table_raw ip6table_filter ip6_tables nf_tables iptable_raw xt_multiport iptable_filter xt_MASQUERADE xt_nat xt_tcpudp iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 bpfilter bonding tls softdog nfnetlink_log nfnetlink xfs ast drm_vram_helper drm_ttm_helper intel_rapl_msr intel_rapl_common amd64_edac edac_mce_amd ttm kvm_amd drm_kms_helper cec kvm rc_core irqbypass fb_sys_fops syscopyarea k10temp crct10dif_pclmul sysfillrect ccp sysimgblt ghash_clmulni_intel aesni_intel mac_hid crypto_simd cryptd wmi_bmof pcspkr rapl zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) vhost_net vhost vhost_iotlb tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi sunrpc drm ip_tables x_tables autofs4 btrfs blake2b_generic xor zstd_compress raid6_pq libcrc32c simplefb crc32_pclmul xhci_pci xhci_pci_renesas i2c_piix4 igb
Jun 01 07:41:27 lucy kernel:  i2c_algo_bit dca ahci xhci_hcd libahci wmi gpio_amdpt gpio_generic
Jun 01 07:41:27 lucy kernel: CPU: 5 PID: 2438 Comm: kvm Tainted: P           O L    5.15.35-1-pve #1
Jun 01 07:41:27 lucy kernel: Hardware name: Hetzner /B565D4-V1L, BIOS L0.23 02/23/2022
Jun 01 07:41:27 lucy kernel: RIP: 0010:rwsem_down_write_slowpath+0x1d2/0x4d0
Jun 01 07:41:27 lucy kernel: Code: 03 00 00 48 83 c4 60 4c 89 e8 5b 41 5c 41 5d 41 5e 41 5f 5d c3 c6 45 c0 01 4c 89 e7 c6 07 00 0f 1f 40 00 fb 66 0f 1f 44 00 00 <45> 85 f6 74 1e 48 8b 03 a9 00 00 02 00 75 07 48 8b 03 a8 04 74 0d
Jun 01 07:41:27 lucy kernel: RSP: 0018:ffffb40cc7c83848 EFLAGS: 00000283
Jun 01 07:41:27 lucy kernel: RAX: 0000000000000006 RBX: ffff8f4e55b96300 RCX: ffffb40cc7ad3c38
Jun 01 07:41:27 lucy kernel: RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffff8f4e4b51f10c
Jun 01 07:41:27 lucy kernel: RBP: ffffb40cc7c838d0 R08: 0000000000000000 R09: ffff8f4e55b96300
Jun 01 07:41:27 lucy kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff8f4e4b51f10c
Jun 01 07:41:27 lucy kernel: R13: ffff8f4e4b51f0f8 R14: 0000000000000000 R15: ffffb40cc7c83868
Jun 01 07:41:27 lucy kernel: FS:  00007f215cc0f1c0(0000) GS:ffff8f5d3eb40000(0000) knlGS:0000000000000000
Jun 01 07:41:27 lucy kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jun 01 07:41:27 lucy kernel: CR2: 00007f30f1121000 CR3: 0000000119316000 CR4: 0000000000350ee0
Jun 01 07:41:27 lucy kernel: Call Trace:
Jun 01 07:41:27 lucy kernel:  <TASK>
Jun 01 07:41:27 lucy kernel:  down_write+0x43/0x50
Jun 01 07:41:27 lucy kernel:  xfs_ilock+0x70/0xf0 [xfs]
Jun 01 07:41:27 lucy kernel:  xfs_vn_update_time+0xc9/0x1d0 [xfs]
Jun 01 07:41:27 lucy kernel:  file_update_time+0xea/0x140
Jun 01 07:41:27 lucy kernel:  file_modified+0x27/0x30
Jun 01 07:41:27 lucy kernel:  xfs_file_write_checks+0x244/0x2c0 [xfs]
Jun 01 07:41:27 lucy kernel:  xfs_file_dio_write_aligned+0x67/0x130 [xfs]
Jun 01 07:41:27 lucy kernel:  xfs_file_write_iter+0x10d/0x1b0 [xfs]
Jun 01 07:41:27 lucy kernel:  ? security_file_permission+0x2f/0x60
Jun 01 07:41:27 lucy kernel:  io_write+0xfe/0x320
Jun 01 07:41:27 lucy kernel:  io_issue_sqe+0x3e9/0x1fb0
Jun 01 07:41:27 lucy kernel:  ? __pollwait+0xd0/0xd0
Jun 01 07:41:27 lucy kernel:  ? __pollwait+0xd0/0xd0
Jun 01 07:41:27 lucy kernel:  __io_queue_sqe+0x35/0x310
Jun 01 07:41:27 lucy kernel:  ? fget+0x2a/0x30
Jun 01 07:41:27 lucy kernel:  io_submit_sqes+0xfb5/0x1b50
Jun 01 07:41:27 lucy kernel:  ? __pollwait+0xd0/0xd0
Jun 01 07:41:27 lucy kernel:  ? __fget_files+0x86/0xc0
Jun 01 07:41:27 lucy kernel:  __do_sys_io_uring_enter+0x520/0x9a0
Jun 01 07:41:27 lucy kernel:  ? __do_sys_io_uring_enter+0x520/0x9a0
Jun 01 07:41:27 lucy kernel:  __x64_sys_io_uring_enter+0x29/0x30
Jun 01 07:41:27 lucy kernel:  do_syscall_64+0x5c/0xc0
Jun 01 07:41:27 lucy kernel:  ? exit_to_user_mode_prepare+0x37/0x1b0
Jun 01 07:41:27 lucy kernel:  ? syscall_exit_to_user_mode+0x27/0x50
Jun 01 07:41:27 lucy kernel:  ? __x64_sys_read+0x1a/0x20
Jun 01 07:41:27 lucy kernel:  ? do_syscall_64+0x69/0xc0
Jun 01 07:41:27 lucy kernel:  ? syscall_exit_to_user_mode+0x27/0x50
Jun 01 07:41:27 lucy kernel:  ? do_syscall_64+0x69/0xc0
Jun 01 07:41:27 lucy kernel:  ? syscall_exit_to_user_mode+0x27/0x50
Jun 01 07:41:27 lucy kernel:  ? __x64_sys_write+0x1a/0x20
Jun 01 07:41:27 lucy kernel:  ? do_syscall_64+0x69/0xc0
Jun 01 07:41:27 lucy kernel:  ? do_syscall_64+0x69/0xc0
Jun 01 07:41:27 lucy kernel:  ? do_syscall_64+0x69/0xc0
Jun 01 07:41:27 lucy kernel:  ? do_syscall_64+0x69/0xc0
Jun 01 07:41:27 lucy kernel:  ? asm_common_interrupt+0x8/0x40
Jun 01 07:41:27 lucy kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
Jun 01 07:41:27 lucy kernel: RIP: 0033:0x7f21675e29b9
Jun 01 07:41:27 lucy kernel: Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d a7 54 0c 00 f7 d8 64 89 01 48
Jun 01 07:41:27 lucy kernel: RSP: 002b:00007ffccdab09f8 EFLAGS: 00000212 ORIG_RAX: 00000000000001aa
Jun 01 07:41:27 lucy kernel: RAX: ffffffffffffffda RBX: 00007f1b3b0b0640 RCX: 00007f21675e29b9
Jun 01 07:41:27 lucy kernel: RDX: 0000000000000000 RSI: 0000000000000003 RDI: 0000000000000011
Jun 01 07:41:27 lucy kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000008
Jun 01 07:41:27 lucy kernel: R10: 0000000000000000 R11: 0000000000000212 R12: 0000559b09eb4e68
Jun 01 07:41:27 lucy kernel: R13: 0000559b09eb4f20 R14: 0000559b09eb4e60 R15: 0000000000000001
Jun 01 07:41:27 lucy kernel:  </TASK>

What does your syslog look like when you hit the bug?
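In case it helps, something like this should pull the relevant lines out of the journal on your node (assuming persistent journaling; otherwise only the current boot is available):
Code:
# kernel messages from the current boot that mention the soft lockup
journalctl -k -b 0 | grep -iA3 'soft lockup'

# after a hard reset, the previous boot is usually the interesting one
journalctl -k -b -1 | grep -iA3 'soft lockup'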
 
Hello,

we're experiencing similar problems; it seems that live migration triggers some weird clock issues. For example, I migrated a CentOS 7 VM from an EPYC 1st-gen node to an EPYC 3rd-gen node and the VM kept running, but when I migrated it back to the EPYC 1st-gen node it crashed with 100% CPU and I could only reset it. I tried this a few times back and forth, and then suddenly other VMs started getting these issues too. In one case, on a Windows server, the time jumped from 1.6.2022 17:00 to 2.6.2022 20:37; the Windows VM recovered, but a few Linux VMs crashed similarly to my test VM. It also seems the issues only happen on AMD EPYC CPUs; we have 4 older nodes with Xeon E5 v4 and v3, and AFAIK there has been no crash on that hardware since the upgrade to 7.2.

Anyone else seeing something like this?

Regards

I have the same issue with live migrations. Since PVE 7.2, live migrations hang my Linux VMs when migrating from an EPYC gen 3 "Milan" node, regardless of the target node. I don't see it when migrating from an EPYC gen 1 node.

It never happened before 7.2 and kernel 5.15.
If I go back to the 5.13 kernel, I can live-migrate from this node without issues.

I have only seen logs of this a few times, but when I have, they were soft-lockup related.
 
I have the same issue with live migrations. Since PVE 7.2, live migrations hang my Linux VMs when migrating from an EPYC gen 3 "Milan" node, regardless of the target node. I don't see it when migrating from an EPYC gen 1 node.

It never happened before 7.2 and kernel 5.15.
If I go back to the 5.13 kernel, I can live-migrate from this node without issues.

I have only seen logs of this a few times, but when I have, they were soft-lockup related.
Which 5.13 kernel exactly are you using? I can't roll back because my Proxmox installation is too new, so I need to install the kernel manually.

Now running kernel version:

Linux 5.13.19-6-pve #1 SMP PVE 5.13.19-15 (Tue, 29 Mar 2022 15:59:50 +0200)

Seems to work flawlessly :)

No bugs for 6 hours now, so it was the kernel. Just roll back or install an older kernel.

SOLVED! Working with the mentioned kernel!
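Roughly the steps I used, in case someone else needs them (double-check the exact kernel package name available in your repository):
Code:
# install the older kernel if it is not already on the system
apt install pve-kernel-5.13.19-6-pve

# pin it so the bootloader keeps booting it across future updates
proxmox-boot-tool kernel pin 5.13.19-6-pve

reboot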
 
We had similar issues with both 5.15.30-2-pve and 5.15.35-1-pve on three different servers (Dell PowerEdge R450 and R440).

We are also running proxmox-boot-tool kernel pin 5.13.19-6-pve on them for now.
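To double-check that the pin took effect, something along these lines:
Code:
# list the installed kernels and the pinned one
proxmox-boot-tool kernel list

# after the reboot this should report 5.13.19-6-pve
uname -r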
 
We had similar issues with both 5.15.30-2-pve and 5.15.35-1-pve on three different servers (Dell PowerEdge R450 and R440).

We are also running proxmox-boot-tool kernel pin 5.13.19-6-pve on them for now.
Do you have AMD processors in your servers? I think it's an AMD-related issue.
 
I have now downgraded and pinned the kernel to 5.13.19-6-pve; since then, no more issues with live migration.
 
