[SOLVED] Proxmox 8.0 / Kernel 6.2.x 100% CPU issue with Windows Server 2019 VMs

Thanks a lot for the data. According to your numastat output, roughly 1/3 of the ~128GiB of QEMU process memory is assigned to NUMA node 0 and 2/3 to node 1. In my previous unsuccessful attempts to reproduce the freezes on real NUMA hardware, this was more of a 50:50 split. So I forced a 1/3 vs 2/3 split by allocating memory on NUMA node 0 (using numactl --preferred and stress-ng) before starting the Windows VM, waited until KSM kicked in (until "KSM sharing" showed ~25GiB), and after starting a couple of RDP sessions, I occasionally saw ping response times of 2-5 seconds. I'll try to look into this further and post updates here.
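For reference, a rough sketch of the kind of commands involved (the tools are the ones mentioned above, but the sizes and flags here are only illustrative, not my exact invocation):
Code:
# Occupy memory on NUMA node 0 before starting the VM, so the VM's allocation gets skewed
# towards node 1 (40G is illustrative -- pick roughly 1/3 of the VM's memory)
numactl --preferred=0 stress-ng --vm 1 --vm-bytes 40G --vm-keep &

# After starting the Windows VM, wait until KSM has merged a significant number of pages
watch cat /sys/kernel/mm/ksm/pages_sharing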

P.S. I will try to rerun the latest tests with mitigations enabled (but it's going to be very painful if it gets worse)
Currently I doubt that mitigations play a huge role (KSM and NUMA balancing seem to be the bigger factors), so I don't think it would pay off to rerun the tests with mitigations enabled if this would disrupt your production traffic.
 
Currently I doubt that mitigations play a huge role (KSM and NUMA balancing seem to be the bigger factors), so I don't think it would pay off to rerun the tests with mitigations enabled if this would disrupt your production traffic.
I was able to rerun the test with mitigations=on and can confirm it has a huge impact. With mitigations=on I see ICMP packet loss and RDP disconnects rather than increased ICMP reply times and micro-freezes.

However, disabling NUMA balancing helps even with mitigations=on.
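For anyone who wants to try the same, a minimal sketch of how NUMA balancing can be disabled on the host (the sysctl takes effect immediately, the command line option makes it persistent):
Code:
# Disable NUMA balancing at runtime (takes effect immediately, lost on reboot)
echo 0 > /proc/sys/kernel/numa_balancing

# To make it persistent, add numa_balancing=disable to the kernel command line
# (e.g. GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then run update-grub) and reboot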
 
I was able to rerun the test with mitigations=on and can confirm it has a huge impact. With mitigations=on I see ICMP packet loss and RDP disconnects rather than increased ICMP reply times and micro-freezes.

However, disabling NUMA balancing helps even with mitigations=on.
Okay, interesting, thanks for checking!

Just to confirm, the configuration for which you saw ICMP packet loss and RDP disconnects was mitigations=on, numa_balancing=1, KSM active?
 
Okay, interesting, thanks for checking!

Just to confirm, the configuration for which you saw ICMP packet loss and RDP disconnects was mitigations=on, numa_balancing=1, KSM active?

Correct. Some data from that node is below.

mitigations=on, numa_balancing=1, KSM active, RDP disconnects / ICMP packet drops
Code:
root@pve-node-04348:~# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
node 0 size: 128819 MB
node 0 free: 58235 MB
node 1 cpus: 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
node 1 size: 128957 MB
node 1 free: 67002 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10

root@pve-node-04348:~# lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         46 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  72
  On-line CPU(s) list:   0-71
Vendor ID:               GenuineIntel
  BIOS Vendor ID:        Intel(R) Corporation
  Model name:            Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
    BIOS Model name:     Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz  CPU @ 2.3GHz
    BIOS CPU family:     179
    CPU family:          6
    Model:               79
    Thread(s) per core:  2
    Core(s) per socket:  18
    Socket(s):           2
    Stepping:            1
    CPU(s) scaling MHz:  79%
    CPU max MHz:         3600.0000
    CPU min MHz:         1200.0000
    BogoMIPS:            4594.72
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts vnmi md_clear flush_l1d
Virtualization features:
  Virtualization:        VT-x
Caches (sum of all):    
  L1d:                   1.1 MiB (36 instances)
  L1i:                   1.1 MiB (36 instances)
  L2:                    9 MiB (36 instances)
  L3:                    90 MiB (2 instances)
NUMA:                   
  NUMA node(s):          2
  NUMA node0 CPU(s):     0-17,36-53
  NUMA node1 CPU(s):     18-35,54-71
Vulnerabilities:        
  Gather data sampling:  Not affected
  Itlb multihit:         KVM: Mitigation: Split huge pages
  L1tf:                  Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
  Mds:                   Mitigation; Clear CPU buffers; SMT vulnerable
  Meltdown:              Mitigation; PTI
  Mmio stale data:       Mitigation; Clear CPU buffers; SMT vulnerable
  Retbleed:              Not affected
  Spec rstack overflow:  Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
  Srbds:                 Not affected
  Tsx async abort:       Mitigation; Clear CPU buffers; SMT vulnerable
root@pve-node-04348:~#
root@pve-node-04348:~# numastat -v $(cat /var/run/qemu-server/6902.pid)

Per-node process memory usage (in MBs) for PID 156106 (kvm)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                         0.00            0.00            0.00
Heap                        19.98            6.18           26.15
Stack                        0.70            0.00            0.70
Private                  60114.95        71004.45       131119.41
----------------  --------------- --------------- ---------------
Total                    60135.63        71010.63       131146.26
 
Thanks.
Currently I doubt that mitigations play a huge role (KSM and NUMA balancing seem to be the bigger factors), [...]
I should have been more precise here. What I actually meant was: it seems to me that the outcome does not differ between mitigations=on and mitigations=off. In other words, as soon as NUMA balancing is enabled and KSM is active on kernels > 5.15, there are intermittent freezes no matter the value of mitigations. Does this align with your experiences?

I made some progress this week trying to reproduce the issue with a Linux (PVE) guest on NUMA hardware. If this reproducer turns out to be reliable, I'll use it to run a git-bisect on the kernel to find the commit with which the issues started appearing. I'll look into this again after the holidays and update this thread (and also post the reproducer here, it needs some cleaning up before I can do that though).
 
Thanks.

I should have been more precise here. What I actually meant was: it seems to me that the outcome does not differ between mitigations=on and mitigations=off. In other words, as soon as NUMA balancing is enabled and KSM is active on kernels > 5.15, there are intermittent freezes no matter the value of mitigations. Does this align with your experiences?

The difference is that ICMP echo reply spikes become packet drops, and micro-freezes become RDS session disconnects/reconnects.
In other words, with mitigations=on the problem is more pronounced.
 
Thanks.

I should have been more precise here. What I actually meant was: it seems to me that the outcome does not differ between mitigations=on and mitigations=off. In other words, as soon as NUMA balancing is enabled and KSM is active on kernels > 5.15, there are intermittent freezes no matter the value of mitigations. Does this align with your experiences?

I made some progress this week trying to reproduce the issue with a Linux (PVE) guest on NUMA hardware. If this reproducer turns out to be reliable, I'll use it to run a git-bisect on the kernel to find the commit with which the issues started appearing. I'll look into this again after the holidays and update this thread (and also post the reproducer here, it needs some cleaning up before I can do that though).

I would say that KSM is out of scope. I'm facing this issue even with 1 VM per node (2-socket board) with almost all cores assigned to the VM. On such nodes I disable the KSM service (it's useless in such a setup).
 
I can reproduce this very easily with an Ubuntu Server 22.04 VM and Nginx on two different PVE nodes with different CPUs. I have tried many things, but so far only numa_balancing=0 helps.
I can reproduce the error from outside with the command:
Code:
ab -n 2400 -c 1000 "https://pmtestserver.local/test/load"
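While the ab run is going, the ICMP reply time spikes mentioned earlier in the thread can be watched from outside with something like this (a sketch; the hostname is the test server from the ab command above):
Code:
# -D prints a timestamp per reply, -i 0.2 sends 5 probes per second
ping -D -i 0.2 pmtestserver.local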

php-fpm pool
Code:
pm = static
pm.max_children = 200
pm.start_servers = 60
pm.min_spare_servers = 70
pm.max_spare_servers = 99
pm.process_idle_timeout = 120s;
pm.max_requests = 200000

1st node:
Code:
CPU(s) 80 x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (2 Sockets)
Kernel Version Linux 6.2.16-19-pve #1 SMP PREEMPT_DYNAMIC PMX 6.2.16-19 (2023-10-24T12:07Z)
2nd node:
Code:
CPU(s) 80 x Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz (2 Sockets)
Kernel Version Linux 6.5.11-7-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.11-7 (2023-12-05T09:44Z)

If you need any additional info please let me know.
 
Just jumped in to give a big thank you to @fweber. I had been searching for a solution for more than a year, since the 6.0 kernel came out; if I remember correctly, all 6.x kernels are affected.
I'm not a Proxmox user, but this is a kernel issue and applies to all 6.x kernel versions in general.
I first noticed higher CPU usage in a macOS VM for the first few minutes after booting; it stuttered a lot with all CPUs reaching 100%. I then noticed the issue in a Windows 11 VM as well, less noticeable, but it was laggy when playing games (with 28 cores inside the VM, idle CPU usage goes from about 4-5% on 5.15.x to 10% or more on 6.x, including 6.6).
As suggested, setting numa_balancing=disable helps a lot.
My setup is quite old: 2 sockets, each with a Xeon E5-2687W (v0).
 
Hi guys,

can you try "echo 0 > /sys/kernel/mm/ksm/merge_across_nodes"?


I have been running it in production for years, and I don't have any problems with my Windows Server 2019 VMs on PVE 8.1 (EPYC servers), with KSM and numa_balancing enabled.
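Note that, if I remember the KSM documentation correctly, merge_across_nodes can only be changed while there are no shared pages, so roughly (a sketch):
Code:
# Unmerge everything first (temporarily increases memory usage, can take a while)
echo 2 > /sys/kernel/mm/ksm/run
# Only merge pages within the same NUMA node from now on
echo 0 > /sys/kernel/mm/ksm/merge_across_nodes
# Re-enable KSM
echo 1 > /sys/kernel/mm/ksm/run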
 
Hi guys,

can you try "echo 0 > /sys/kernel/mm/ksm/merge_across_nodes"?


I have been running it in production for years, and I don't have any problems with my Windows Server 2019 VMs on PVE 8.1 (EPYC servers), with KSM and numa_balancing enabled.

Thanks for sharing. I will give it a try.
However, I'm afraid this isn't the key to the problem in this thread - I'm facing this issue even on hosts with only a single VM (1 VM per host) and KSM disabled (it's useless in such a setup).
 
Hello All. Happy New Year.

Exactly one month ago, I started a new thread (here) with some of the issues described in this thread. My server is brand new (Supermicro SuperServer), with 2 Intel(R) Xeon(R) Silver 4309Y CPUs @ 2.80GHz, 256GB RAM and 2 RAID controllers:

Avago MegaRAID 9440-8i - for RAID 1

Avago MegaRAID 9341-4i - for 4x 8TB RAID 5

The difference is that I'm trying to install Windows Server 2022, but I can confirm that I do also experience stalling and ping delays.

So, would you recommend PVE 8.1 with kernel 5, or kernel 6 with the mitigations, numa_balancing and KSM settings identified here?

Thanks in advance.
 
Hello All. Happy New Year.

Exactly one month ago, I started a new thread (here) with some of the issues described in this thread. My server is brand new (Supermicro SuperServer), with 2 Intel(R) Xeon(R) Silver 4309Y CPUs @ 2.80GHz, 256GB RAM and 2 RAID controllers:

Avago MegaRAID 9440-8i - for RAID 1

Avago MegaRAID 9341-4i - for 4x 8TB RAID 5

The difference is that I'm trying to install Windows Server 2022, but I can confirm that I do also experience stalling and ping delays.

So, would you recommend PVE 8.1 with kernel 5, or kernel 6 with the mitigations, numa_balancing and KSM settings identified here?

Thanks in advance.

Well, I'm pretty confident that this issue is not directly related to the guest OS.
In my case I have not seen such an issue on PVE 7.x with kernels < 6.x.
However, disabling NUMA balancing does the trick (I'm not sure what impact that has on performance).
If you have the option to try the latest advice from @spirit (disabling merge_across_nodes without disabling NUMA balancing), that would be much appreciated!
 
I put together a Linux guest reproducer that reliably triggers VM hangs on our (NUMA) test host in the presence of KSM and NUMA balancing. I used it for a kernel bisect and asked upstream on the KVM mailing list whether they have an idea how to debug this further [1] (also included the reproducer there). Note that it's not 100% certain the Linux reproducer triggers the same kind of hangs as the Windows VM hangs reported in this thread, but as the symptoms are very similar and disabling NUMA balancing seems to fix them, it does seem likely.

I would say that KSM is out of scope. I'm facing this issue even with 1 VM per node (2-socket board) with almost all cores assigned to the VM. On such nodes I disable the KSM service (it's useless in such a setup).
So far I haven't been able to trigger the hang if KSM is disabled. But maybe I've been missing some factor that can trigger the hang even without KSM. Could you post the output of lscpu and numactl -H, the configs of the two VMs, and double-check that /sys/kernel/mm/ksm/pages_sharing is indeed 0?
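For example (a quick sketch; on PVE, KSM is managed by the ksmtuned service):
Code:
cat /sys/kernel/mm/ksm/pages_sharing   # should print 0 if KSM is really inactive
systemctl status ksmtuned              # the KSM control daemon shipped with PVE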

If anyone is seeing these freezes with KSM disabled and NUMA balancing enabled: Could you run the following bpftrace script (using bpftrace script.bpf) and check whether it gives any output during the freezes? You'll need to install bpftrace first. The script tracks task_numa_work calls that take over 500ms.

Code:
kfunc:task_numa_work { @start[tid] = nsecs; }
kretfunc:task_numa_work /@start[tid]/ {
    $diff = nsecs - @start[tid];
    if ($diff > 500000000) { // 500ms
        time("[%s] ");
        printf("task_numa_work (tid=%d) took %d ms\n", tid, $diff / 1000000);
    }
    delete(@start[tid]);
}
Please also post the output of lscpu and numactl -H, and the VM config.

The difference is that I'm trying to install Windows Server 2022, but I can confirm that I do also experience stalling and ping delays.

So, would you recommend PVE 8.1 with kernel 5, or kernel 6 with the mitigations, numa_balancing and KSM settings identified here?

Thanks in advance.
As @Whatever mentioned: The issue discussed in this thread does not only affect Windows VMs, but it does look like they are more likely to trigger the hangs (on a hunch: maybe due to Windows-specific memory access patterns).

If your host is a NUMA host (you can post the output of lscpu and numactl -H), the stalls and ping delays you're describing might indeed be due to the issue discussed in this thread. But there is no guarantee; the freezes could also be due to other factors. If the bpftrace script above shows very long task_numa_work calls during the freezes, this would make it more likely that the NUMA balancer is involved, and that disabling NUMA balancing under PVE 8 would mitigate the freezes for now.

I wouldn't recommend running kernel 5.15 on PVE 8 as this is unsupported -- if you want to use kernel 5.15, you can try PVE 7, which will still be supported until July 2024 [2].

[1] https://lore.kernel.org/kvm/832697b9-3652-422d-a019-8c0574a188ac@proxmox.com/T/#u
[2] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_frequently_asked_questions_2
 
Hi, a quick update: It seems likely that the issue reported here matches one that is known upstream and actively being worked on [1]. There is a kernel bug report from December [2] which sounds very much like the issue reported here. However, KSM was never enabled for the reporter of [2] -- this matches @Whatever's observation that the hangs can happen even with KSM disabled (I haven't been able to reproduce this yet, though). Regarding workarounds, disabling the NUMA balancer is also mentioned [3]; another one seems to be disabling the TDP MMU [4] (but this is apparently a bit involved on kernel 6.5). I'll test the proposed fixes and post notable updates here.

[1] https://lore.kernel.org/kvm/832697b.../T/#madb7c53fa7ea71144fe8019c5cdd5cd7bf032238
[2] https://bugzilla.kernel.org/show_bug.cgi?id=218259
[3] https://forum.proxmox.com/threads/130727/page-8#post-619746
[4] https://bugzilla.kernel.org/show_bug.cgi?id=218259#c1
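If I understand correctly, disabling the TDP MMU is done via the kvm.tdp_mmu module parameter; on recent kernels it is read-only at runtime, so it has to be set at module load or boot time, roughly like this (untested sketch, the config file name is arbitrary):
Code:
# /etc/modprobe.d/kvm-tdp-mmu.conf -- disable the TDP MMU for the kvm module
options kvm tdp_mmu=0
# Alternatively, add kvm.tdp_mmu=0 to the kernel command line; a reboot (or reload of the
# kvm modules with all VMs stopped) is needed either way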
 
Hi, a quick update: It seems likely that the issue reported here matches one that is known upstream and actively being worked on [1]. There is a kernel bug report from December [2] which sounds very much like the issue reported here. However, KSM was never enabled for the reporter of [2] -- this matches @Whatever's observation that the hangs can happen even with KSM disabled (I haven't been able to reproduce this yet, though). Regarding workarounds, disabling the NUMA balancer is also mentioned [3]; another one seems to be disabling the TDP MMU [4] (but this is apparently a bit involved on kernel 6.5). I'll test the proposed fixes and post notable updates here.

[1] https://lore.kernel.org/kvm/832697b.../T/#madb7c53fa7ea71144fe8019c5cdd5cd7bf032238
[2] https://bugzilla.kernel.org/show_bug.cgi?id=218259
[3] https://forum.proxmox.com/threads/130727/page-8#post-619746
[4] https://bugzilla.kernel.org/show_bug.cgi?id=218259#c1
Thanks a lot!
Just some highlights that drove me crazy:

> > This is likely/hopefully the same thing Yan encountered[1]. If you are
> able
> > to
> > test patches, the proposed fix[2] applies cleanly on v6.6 (note, I need to
> > post a
> > refreshed version of the series regardless), any feedback you can provide
> > would
> > be much appreciated.
> >
> > [1] https://lore.kernel.org/all/ZNnPF4W26ZbAyGto@yzhao56-desk.sh.intel.com
> > [2] https://lore.kernel.org/all/20230825020733.2849862-1-seanjc@google.com
>
> I admit that I don't understand most of what's written in the those threads.

LOL, no worries, sometimes none of us understand what's written either ;-)

....

> > KVM changes aside, I highly recommend evaluating whether or not NUMA
> > autobalancing is a net positive for your environment. The interactions
> > between
> > autobalancing and KVM are often less than stellar, and disabling
> > autobalancing
> > is sometimes a completely legitimate option/solution.
>
> I'll have to evaluate multiple options for my production environment.
> Patching+Building the kernel myself would only be a last resort. And it will
> probably take a while until Debian ships a patch for the issue. So maybe
> disable the NUMA balancing, or perhaps try to pin a VM's memory+cpu to a
> single
> NUMA node.

If disabling NUMA balancing is a completely legitimate solution, shouldn't it be disabled by default?
 
Hi, a quick update: It seems likely that the issue reported here matches one that is known upstream and actively being worked on [1]. There is a kernel bugreport from December [2] which very much sounds like the issue reported here. [...] I'll test the proposed fixes and post notable updates here.

Wow, this is indeed great news @fweber! As the initial poster of this issue back in June 2023, I was of course still monitoring what might come up, even though, with the help of @Whatever, I was able to solve our initial issues with the ICMP spikes and unresponsive Windows VMs we had after the migration from PVE 7 to PVE 8 and kernel 6.2. In fact, since I disabled KSM and set mitigations=off on all our PVE nodes, we haven't seen any more of these spikes or unresponsive Windows VMs. However, since we are about to migrate to PVE 8.1 with kernel 6.5, I was of course curious to check first whether that issue might have been related to kernel 6.2 only.

It's great that you picked this topic up yourself and seem to have identified the root cause, namely NUMA balancing. Reading through your kernel mailing list post and the replies by the kernel maintainers makes me confident that there might soon be a PVE kernel update with these fixes applied, so that the issue I initially posted here might finally be fixed. So please keep us updated here, and if the proposed patches solve the issue, please make sure to push them into a soon-to-be-released PVE kernel update for PVE 8.1 and kernel 6.5+.
 
Just contributing to what you guys already said: I'm seeing this issue on a fully-upgraded Proxmox 8 with pve-qemu-kvm 8.1.2-6 and a single Debian 12 guest with NUMA enabled. The guest is very laggy and has permanently high CPU steal, although there are no other guests and the host itself is idle.

When the host is booted with numa_balancing=disable and the guest also has the numa flag disabled, everything returns to normal and runs smoothly.
mitigations=off made no difference, on either host or guest.
Proxmox 7 was problem-free with numa enabled on the same host.

Holding off on migrating any hosts to v8 for now, as some guests require larger amounts of RAM and NUMA.
 
I tested the patches that were proposed upstream, see [1] for details. They do seem to fix the freezes for the reproducer (if applied to the correct base commit, see [1]), so it is likely they would also fix the freezes reported in this thread. However, the patches are quite new and have not been applied upstream, and one of the patches may be a bit difficult to backport. So we'll have to figure out what's the best way to get a fix into the PVE kernel. For now I've sent an RFC patch with more details to our PVE development mailing list [2].

[1] https://lore.kernel.org/kvm/832697b.../T/#md0dd7dbfe4d395e34ddda722455ba4c4fba6511a
[2] https://lists.proxmox.com/pipermail/pve-devel/2024-January/061399.html
 
If disabling NUMA balancing is a completely legitimate solution, shouldn't it be disabled by default?
Good question, currently I don't know. But FWIW, as far as I understand the proposed upstream patch [0], the freezes are not directly caused by the NUMA balancer, but rather result from an unfortunate, complex interaction between the kernel scheduler logic, KSM, the NUMA balancer and KVM.
Wow, this is indeed great news @fweber! As the initial poster of this issue back in June 2023, I was of course still monitoring what might come up, even though, with the help of @Whatever, I was able to solve our initial issues with the ICMP spikes and unresponsive Windows VMs we had after the migration from PVE 7 to PVE 8 and kernel 6.2. In fact, since I disabled KSM and set mitigations=off on all our PVE nodes, we haven't seen any more of these spikes or unresponsive Windows VMs. However, since we are about to migrate to PVE 8.1 with kernel 6.5, I was of course curious to check first whether that issue might have been related to kernel 6.2 only.
Good to hear that disabling KSM and mitigations still works for you! For posterity, I do want to mention that in general, disabling mitigations is not advisable (due to its security implications). Currently it seems like disabling NUMA balancing is a better workaround (until there is a permanent fix), as it would allow you to re-enable mitigations and (if wanted) KSM.
Just contributing to what you guys already said: I'm seeing this issue on a fully-upgraded Proxmox 8 with pve-qemu-kvm 8.1.2-6 and a single Debian 12 guest with NUMA enabled. The guest is very laggy and has permanently high CPU steal, although there are no other guests and the host itself is idle.
Out of curiosity, can you post the output of lscpu?
When the host is booted with numa_balancing=disable and the guest also has the numa flag disabled, everything returns to normal and runs smoothly.
mitigations=off made no difference, on either host or guest.
Proxmox 7 was problem-free with numa enabled on the same host.

Holding off on migrating any hosts to v8 for now, as some guests require larger amounts of RAM and NUMA.
Note that one needs to differentiate between NUMA emulation for VMs [1], which is enabled on a per-VM basis (the "Enable NUMA" checkbox in the GUI), and the NUMA balancer, which is a kernel task running on the host [2] and thus affects all VMs running on that host. In my tests with the reproducer [3], it looks like a VM can freeze regardless of whether NUMA emulation is enabled or disabled for that VM, as long as the NUMA balancer is enabled on the (NUMA) host. So in other words, I don't think the value of the "Enable NUMA" checkbox for a VM makes any difference for the freezes.

[0] https://lore.kernel.org/all/20240110012045.505046-1-seanjc@google.com/
[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_numa
[2] https://doc.opensuse.org/documentation/leap/tuning/html/book-tuning/cha-tuning-numactl.html
[3] https://lore.kernel.org/kvm/832697b9-3652-422d-a019-8c0574a188ac@proxmox.com/T/#u
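To make the distinction between the two settings concrete, a small sketch of where they live (VMID 100 is just an example):
Code:
# Per-VM NUMA emulation (the "Enable NUMA" checkbox), in /etc/pve/qemu-server/100.conf:
#   numa: 1

# Host-wide NUMA balancer, independent of any particular VM:
cat /proc/sys/kernel/numa_balancing    # 0 = disabled, non-zero = enabled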
 
