qmeventd shutdown VM

Veeh

Hi,

I'm having a bit of a problem since I set up my 2-node cluster.
The issue first appeared on 6.4; I then upgraded both nodes to 7.0. The upgrade itself went fine, but the issue is still there.

I have a main node with a couple of VMs, and I added a second node a couple of weeks ago.
The second node is not always on, so I have a cron job that sets the expected quorum votes to 1 when it is offline.
Since I did that, qmeventd shuts down my VMs from time to time and I can't figure out why.
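For reference, the cron job is roughly the following (a sketch, not my exact crontab; `pvecm status` and `pvecm expected` are the standard PVE commands for this):

```
# /etc/cron.d/quorum-fix -- sketch; check every 5 minutes
# If the cluster lost quorum (second node offline), lower the expected
# votes to 1 so this remaining node stays quorate.
*/5 * * * * root pvecm status | grep -q 'Quorate:.*No' && pvecm expected 1
```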

I have VMs 100, 101, 102, 103, and 105 running on this node. As you can see in the log below, all of them are shut down by qmeventd:

Code:
Jul  9 03:01:15 stmx QEMU[8103]: kvm: terminating on signal 15 from pid 3347 (/usr/sbin/qmeventd)
Jul  9 03:01:16 stmx qmeventd[1662521]: Starting cleanup for 103
Jul  9 03:01:16 stmx qmeventd[1662521]: Finished cleanup for 103
Jul  9 03:02:03 stmx QEMU[4733]: kvm: terminating on signal 15 from pid 3347 (/usr/sbin/qmeventd)
Jul  9 03:02:04 stmx qmeventd[1664620]: Starting cleanup for 105
Jul  9 03:02:04 stmx qmeventd[1664620]: Finished cleanup for 105
Jul  9 03:03:04 stmx QEMU[4432]: kvm: terminating on signal 15 from pid 3347 (/usr/sbin/qmeventd)
Jul  9 03:03:05 stmx qmeventd[1668332]: Starting cleanup for 102
Jul  9 03:03:05 stmx qmeventd[1668332]: Finished cleanup for 102
Jul  9 03:04:05 stmx QEMU[4183]: kvm: terminating on signal 15 from pid 3347 (/usr/sbin/qmeventd)
Jul  9 03:04:06 stmx qmeventd[1670989]: Starting cleanup for 100
Jul  9 03:04:06 stmx qmeventd[1670989]: Finished cleanup for 100
Jul  9 03:05:04 stmx QEMU[4089]: kvm: terminating on signal 15 from pid 3347 (/usr/sbin/qmeventd)
Jul  9 03:05:05 stmx qmeventd[1673387]: Starting cleanup for 101
Jul  9 03:05:05 stmx qmeventd[1673387]: Finished cleanup for 101

This is not happening every day; I have had the issue twice so far, but each time it happens around 3–4 am.
I don't have anything scheduled on the node: no cron, no reboot, no backup.
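These are the places I checked for scheduled jobs, in case anyone wants to verify the same (`/etc/pve/vzdump.cron` holds the PVE backup schedule):

```
crontab -l                                   # root's crontab
ls /etc/cron.d /etc/cron.daily /etc/cron.hourly
systemctl list-timers --all                  # systemd timers
cat /etc/pve/vzdump.cron                     # PVE backup jobs
```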

I have checked dmesg and there are no out-of-memory logs. According to the graph, about a quarter of the host's total RAM was still available before the issue happened.
I have a lot of these:
Code:
[    0.052608] PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfedfffff]
[    0.052609] PM: hibernation: Registered nosave memory: [mem 0xfee00000-0xfee00fff]
[    0.052609] PM: hibernation: Registered nosave memory: [mem 0xfee01000-0xfeffffff]
[    0.052610] PM: hibernation: Registered nosave memory: [mem 0xff000000-0xffffffff]

I don't know if this is related.

I'll set up a cron to collect syslog and dmesg during that time.
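Something along these lines (a sketch; the paths and the 5 am time, just after the usual 3–4 am window, are assumptions):

```
# /etc/cron.d/capture-logs -- sketch; note that % must be escaped in crontabs
0 5 * * * root mkdir -p /root/debug && dmesg -T > /root/debug/dmesg-$(date +\%F).log
0 5 * * * root tail -n 1000 /var/log/syslog > /root/debug/syslog-$(date +\%F).log
```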
Any ideas?
thanks

Veeh


Full Syslog for that time frame:
Code:
Jul  9 03:01:00 stmx systemd[1]: Starting Proxmox VE replication runner...
Jul  9 03:01:01 stmx systemd[1]: pvesr.service: Succeeded.
Jul  9 03:01:01 stmx systemd[1]: Finished Proxmox VE replication runner.
Jul  9 03:01:15 stmx QEMU[8103]: kvm: terminating on signal 15 from pid 3347 (/usr/sbin/qmeventd)
Jul  9 03:01:15 stmx kernel: [46796.252312]  zd272: p1 p2 p3
Jul  9 03:01:15 stmx kernel: [46796.282063]  zd352: p1
Jul  9 03:01:15 stmx systemd[1]: Stopping LVM event activation on device 230:353...
Jul  9 03:01:15 stmx systemd[1]: Stopping LVM event activation on device 230:275...
Jul  9 03:01:15 stmx systemd[1]: Requested transaction contradicts existing jobs: Resource deadlock avoided
Jul  9 03:01:15 stmx systemd[1]: Requested transaction contradicts existing jobs: Resource deadlock avoided
Jul  9 03:01:15 stmx lvm[1662467]:   pvscan[1662467] /dev/zd272p3 excluded by filters: device is rejected by filter config.
Jul  9 03:01:15 stmx lvm[1662442]:   pvscan[1662442] /dev/zd352p1 excluded by filters: device is rejected by filter config.
Jul  9 03:01:15 stmx systemd[1]: lvm2-pvscan@230:275.service: Succeeded.
Jul  9 03:01:15 stmx systemd[1]: Stopped LVM event activation on device 230:275.
Jul  9 03:01:15 stmx systemd[1]: lvm2-pvscan@230:353.service: Succeeded.
Jul  9 03:01:15 stmx systemd[1]: Stopped LVM event activation on device 230:353.
Jul  9 03:01:15 stmx kernel: [46796.451319] vmbr1: port 5(tap103i0) entered disabled state
Jul  9 03:01:15 stmx systemd[1]: 103.scope: Succeeded.
Jul  9 03:01:15 stmx systemd[1]: 103.scope: Consumed 3h 57min 23.624s CPU time.
Jul  9 03:01:16 stmx qmeventd[1662521]: Starting cleanup for 103
Jul  9 03:01:16 stmx qmeventd[1662521]: Finished cleanup for 103
Jul  9 03:01:27 stmx pmxcfs[3649]: [dcdb] notice: data verification successful
Jul  9 03:01:41 stmx smartd[3346]: Device: /dev/sdd [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 66 to 60
Jul  9 03:02:00 stmx systemd[1]: Starting Proxmox VE replication runner...
Jul  9 03:02:01 stmx systemd[1]: pvesr.service: Succeeded.
Jul  9 03:02:01 stmx systemd[1]: Finished Proxmox VE replication runner.
Jul  9 03:02:03 stmx QEMU[4733]: kvm: terminating on signal 15 from pid 3347 (/usr/sbin/qmeventd)
Jul  9 03:02:03 stmx kernel: [46844.904203]  zd368: p1 p2 p3
Jul  9 03:02:03 stmx kernel: [46844.906585]  zd304: p1
Jul  9 03:02:03 stmx systemd[1]: Stopping LVM event activation on device 230:305...
Jul  9 03:02:03 stmx systemd[1]: Stopping LVM event activation on device 230:371...
Jul  9 03:02:03 stmx systemd[1]: Requested transaction contradicts existing jobs: Resource deadlock avoided
Jul  9 03:02:03 stmx systemd[1]: Requested transaction contradicts existing jobs: Resource deadlock avoided
Jul  9 03:02:03 stmx lvm[1664551]:   pvscan[1664551] /dev/zd304p1 excluded by filters: device is rejected by filter config.
Jul  9 03:02:03 stmx lvm[1664586]:   pvscan[1664586] /dev/zd368p3 excluded by filters: device is rejected by filter config.
Jul  9 03:02:03 stmx systemd[1]: lvm2-pvscan@230:305.service: Succeeded.
Jul  9 03:02:03 stmx systemd[1]: Stopped LVM event activation on device 230:305.
Jul  9 03:02:03 stmx systemd[1]: lvm2-pvscan@230:371.service: Succeeded.
Jul  9 03:02:03 stmx systemd[1]: Stopped LVM event activation on device 230:371.
Jul  9 03:02:04 stmx kernel: [46845.060500] vmbr1: port 4(tap105i0) entered disabled state
Jul  9 03:02:04 stmx systemd[1]: 105.scope: Succeeded.
Jul  9 03:02:04 stmx systemd[1]: 105.scope: Consumed 14min 9.979s CPU time.
Jul  9 03:02:04 stmx qmeventd[1664620]: Starting cleanup for 105
Jul  9 03:02:04 stmx qmeventd[1664620]: Finished cleanup for 105
Jul  9 03:03:00 stmx systemd[1]: Starting Proxmox VE replication runner...
Jul  9 03:03:01 stmx systemd[1]: pvesr.service: Succeeded.
Jul  9 03:03:01 stmx systemd[1]: Finished Proxmox VE replication runner.
Jul  9 03:03:04 stmx QEMU[4432]: kvm: terminating on signal 15 from pid 3347 (/usr/sbin/qmeventd)
Jul  9 03:03:05 stmx kernel: [46906.045003]  zd384: p1 p2 p3
Jul  9 03:03:05 stmx systemd[1]: Stopping LVM event activation on device 230:387...
Jul  9 03:03:05 stmx systemd[1]: Requested transaction contradicts existing jobs: Resource deadlock avoided
Jul  9 03:03:05 stmx lvm[1668267]:   pvscan[1668267] /dev/zd384p3 excluded by filters: device is rejected by filter config.
Jul  9 03:03:05 stmx systemd[1]: lvm2-pvscan@230:387.service: Succeeded.
Jul  9 03:03:05 stmx systemd[1]: Stopped LVM event activation on device 230:387.
Jul  9 03:03:05 stmx kernel: [46906.203878] vmbr1: port 3(tap102i0) entered disabled state
Jul  9 03:03:05 stmx systemd[1]: 102.scope: Succeeded.
Jul  9 03:03:05 stmx systemd[1]: 102.scope: Consumed 15min 26.002s CPU time.
Jul  9 03:03:05 stmx qmeventd[1668332]: Starting cleanup for 102
Jul  9 03:03:05 stmx qmeventd[1668332]: Finished cleanup for 102
Jul  9 03:04:00 stmx systemd[1]: Starting Proxmox VE replication runner...
Jul  9 03:04:01 stmx systemd[1]: pvesr.service: Succeeded.
Jul  9 03:04:01 stmx systemd[1]: Finished Proxmox VE replication runner.
Jul  9 03:04:05 stmx QEMU[4183]: kvm: terminating on signal 15 from pid 3347 (/usr/sbin/qmeventd)
Jul  9 03:04:05 stmx kernel: [46966.778875]  zd240: p1 p2 p3
Jul  9 03:04:05 stmx systemd[1]: Stopping LVM event activation on device 230:243...
Jul  9 03:04:05 stmx systemd[1]: Requested transaction contradicts existing jobs: Resource deadlock avoided
Jul  9 03:04:05 stmx lvm[1670928]:   pvscan[1670928] /dev/zd240p3 excluded by filters: device is rejected by filter config.
Jul  9 03:04:05 stmx systemd[1]: lvm2-pvscan@230:243.service: Succeeded.
Jul  9 03:04:05 stmx systemd[1]: Stopped LVM event activation on device 230:243.
Jul  9 03:04:05 stmx kernel: [46966.947179] vmbr1: port 2(tap100i0) entered disabled state
Jul  9 03:04:06 stmx kernel: [46967.069822] device enp1s0 left promiscuous mode
Jul  9 03:04:06 stmx systemd[1]: 100.scope: Succeeded.
Jul  9 03:04:06 stmx systemd[1]: 100.scope: Consumed 14min 1.119s CPU time.
Jul  9 03:04:06 stmx qmeventd[1670989]: Starting cleanup for 100
Jul  9 03:04:06 stmx qmeventd[1670989]: Finished cleanup for 100
Jul  9 03:04:16 stmx pvestatd[3785]: storage 'Backup' is not online
Jul  9 03:04:16 stmx pvestatd[3785]: status update time (5.068 seconds)
Jul  9 03:04:26 stmx pvestatd[3785]: storage 'Backup' is not online
Jul  9 03:04:26 stmx pvestatd[3785]: status update time (5.060 seconds)
Jul  9 03:04:33 stmx pvestatd[3785]: storage 'Backup' is not online
Jul  9 03:04:42 stmx pvestatd[3785]: storage 'Backup' is not online
Jul  9 03:04:51 stmx pvestatd[3785]: storage 'Backup' is not online
Jul  9 03:05:00 stmx systemd[1]: Starting Proxmox VE replication runner...
Jul  9 03:05:01 stmx systemd[1]: pvesr.service: Succeeded.
Jul  9 03:05:01 stmx systemd[1]: Finished Proxmox VE replication runner.
Jul  9 03:05:03 stmx pvestatd[3785]: storage 'Backup' is not online
Jul  9 03:05:04 stmx QEMU[4089]: kvm: terminating on signal 15 from pid 3347 (/usr/sbin/qmeventd)
Jul  9 03:05:04 stmx kernel: [47025.405412]  zd160: p1 p2 p3
Jul  9 03:05:04 stmx systemd[1]: Stopping LVM event activation on device 230:163...
Jul  9 03:05:04 stmx systemd[1]: Requested transaction contradicts existing jobs: Resource deadlock avoided
Jul  9 03:05:04 stmx lvm[1673335]:   pvscan[1673335] /dev/zd160p3 excluded by filters: device is rejected by filter config.
Jul  9 03:05:04 stmx kernel: [47025.560887] vmbr0: port 2(tap101i0) entered disabled state
Jul  9 03:05:04 stmx systemd[1]: lvm2-pvscan@230:163.service: Succeeded.
Jul  9 03:05:04 stmx systemd[1]: Stopped LVM event activation on device 230:163.
Jul  9 03:05:04 stmx kernel: [47025.616930] device enp0s25 left promiscuous mode
Jul  9 03:05:04 stmx systemd[1]: 101.scope: Succeeded.
Jul  9 03:05:04 stmx systemd[1]: 101.scope: Consumed 9min 25.226s CPU time.
Jul  9 03:05:05 stmx qmeventd[1673387]: Starting cleanup for 101
Jul  9 03:05:05 stmx qmeventd[1673387]: Finished cleanup for 101
Jul  9 03:05:13 stmx pvestatd[3785]: storage 'Backup' is not online
Jul  9 03:05:22 stmx pvestatd[3785]: storage 'Backup' is not online
Jul  9 03:05:31 stmx pvestatd[3785]: storage 'Backup' is not online
Jul  9 03:05:43 stmx pvestatd[3785]: storage 'Backup' is not online
Jul  9 03:05:53 stmx pvestatd[3785]: storage 'Backup' is not online
 
I'm joining your request; this evening I had the same incident, a VM being stopped by /usr/sbin/qmeventd.

Code:
Aug 15 18:50:01 pve1 systemd[1]: pvesr.service: Succeeded.
Aug 15 18:50:01 pve1 systemd[1]: Finished Proxmox VE replication runner.
Aug 15 18:50:09 pve1 QEMU[249525]: kvm: terminating on signal 15 from pid 856 (/usr/sbin/qmeventd)
Aug 15 18:50:09 pve1 kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Aug 15 18:50:09 pve1 kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Aug 15 18:50:09 pve1 kernel: vmbr1: port 1(fwpr101p0) entered disabled state
Aug 15 18:50:09 pve1 kernel: device fwln101i0 left promiscuous mode
Aug 15 18:50:09 pve1 kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Aug 15 18:50:09 pve1 kernel: device fwpr101p0 left promiscuous mode
Aug 15 18:50:09 pve1 kernel: vmbr1: port 1(fwpr101p0) entered disabled state
Aug 15 18:50:09 pve1 systemd[1]: 101.scope: Succeeded.
Aug 15 18:50:09 pve1 systemd[1]: 101.scope: Consumed 13min 29.069s CPU time.
Aug 15 18:50:10 pve1 qmeventd[258147]: Starting cleanup for 101
Aug 15 18:50:10 pve1 qmeventd[258147]: Finished cleanup for 101
 
I used to have a lot of problems, like the entire server getting stuck in a boot loop, but I now disable the VMs from starting on boot to avoid that. I played around with the settings quite a bit and noticed that having the guest agent in the VM seems to help; I tinkered with the settings to the point where the entire server no longer crashes. I also updated to the latest possible kernel. Now my VM crashes when resuming from a suspended state, and the server will crash if I initiate hibernation from Proxmox. These crashes happen with Windows VMs. At least resume now crashes more gracefully, without a server reboot. I suspect my problems are due to the AMD Zen 3 processor, but I am not sure; I'm eagerly awaiting kernel 5.12 and hoping for better QEMU stability.

Code:
Sep 29 14:11:17 NoobNoob pvedaemon[496156]: resume VM 100: UPID:NoobNoob:0007921C:0051997B:6154AC45:qmresume:100:root@pam:

Sep 29 14:11:17 NoobNoob pvedaemon[63263]: <root@pam> starting task UPID:NoobNoob:0007921C:0051997B:6154AC45:qmresume:100:root@pam:

Sep 29 14:11:17 NoobNoob pvedaemon[63263]: <root@pam> end task UPID:NoobNoob:0007921C:0051997B:6154AC45:qmresume:100:root@pam: OK

Sep 29 14:11:19 NoobNoob QEMU[266679]: kvm: terminating on signal 15 from pid 62879 (/usr/sbin/qmeventd)
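To check whether the guest agent is actually responding inside a VM, this is the standard `qm` call (VMID 100 is just my example):

```
# ping the QEMU guest agent in VM 100; errors out if the agent is not running
qm agent 100 ping && echo "guest agent is responding"
```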
 
Same thing with a Windows 11 guest.

Code:
Feb 08 14:19:45 pve QEMU[552623]: kvm: terminating on signal 15 from pid 8660 (/usr/sbin/qmeventd)
Feb 08 14:19:45 pve kernel: fwbr103i0: port 2(tap103i0) entered disabled state
Feb 08 14:19:45 pve kernel: fwbr103i0: port 1(fwln103i0) entered disabled state
Feb 08 14:19:45 pve kernel: vmbr0: port 4(fwpr103p0) entered disabled state
Feb 08 14:19:45 pve kernel: device fwln103i0 left promiscuous mode
Feb 08 14:19:45 pve kernel: fwbr103i0: port 1(fwln103i0) entered disabled state
Feb 08 14:19:45 pve kernel: device fwpr103p0 left promiscuous mode
Feb 08 14:19:45 pve kernel: vmbr0: port 4(fwpr103p0) entered disabled state
Feb 08 14:19:45 pve systemd[1]: 103.scope: Succeeded.
Feb 08 14:19:45 pve systemd[1]: 103.scope: Consumed 5min 47.191s CPU time.
Feb 08 14:19:46 pve qmeventd[874689]: Starting cleanup for 103
Feb 08 14:19:46 pve qmeventd[874689]: Finished cleanup for 103
 
Any solutions? Recreating the VM didn't resolve this problem.
Windows 10 guest with UEFI BIOS and TPM 2.0:

Code:
Sep 12 13:11:16 pvebussy kernel: [28801.353621] x86/split lock detection: #AC: CPU 2/KVM/1417 took a split_lock trap at address: 0xfffff8020eeca7df
Sep 12 13:12:31 pvebussy kernel: [28876.671687] fwbr103i0: port 2(tap103i0) entered disabled state
Sep 12 13:12:31 pvebussy kernel: [28876.696146] fwbr103i0: port 1(fwln103i0) entered disabled state
Sep 12 13:12:31 pvebussy kernel: [28876.696177] vmbr0: port 4(fwpr103p0) entered disabled state
Sep 12 13:12:31 pvebussy kernel: [28876.696468] device fwln103i0 left promiscuous mode
Sep 12 13:12:31 pvebussy kernel: [28876.696470] fwbr103i0: port 1(fwln103i0) entered disabled state
Sep 12 13:12:31 pvebussy kernel: [28876.718975] device fwpr103p0 left promiscuous mode
Sep 12 13:12:31 pvebussy kernel: [28876.718976] vmbr0: port 4(fwpr103p0) entered disabled state
Sep 12 13:12:31 pvebussy QEMU[65715]: kvm: terminating on signal 15 from pid 978 (/usr/sbin/qmeventd)
Sep 12 13:12:32 pvebussy qmeventd[104768]: Finished cleanup for 103
Sep 12 13:12:32 pvebussy qmeventd[104768]: Starting cleanup for 103
Sep 12 13:12:33 pvebussy systemd[1]: 103.scope: Consumed 4h 26min 36.747s CPU time.
Sep 12 13:12:33 pvebussy systemd[1]: 103.scope: Succeeded.
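The split_lock trap right before the shutdown may or may not be related. To rule it out, split-lock detection can be turned off with a kernel parameter (a sketch for GRUB on Debian; I haven't confirmed this fixes anything):

```
# /etc/default/grub -- disable split-lock detection (Intel CPUs)
GRUB_CMDLINE_LINUX_DEFAULT="quiet split_lock_detect=off"
# then apply and reboot:
#   update-grub && reboot
```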
 
This has happened to me randomly now, but only on one VM out of 20.
The crashes happened twice, about one month apart.
The 20 VMs are in a cluster with two hosts.

The VM that randomly crashes is a Windows Server 2019 guest.
Compared to the other VMs it has relatively high RAM (say 8 GB versus 4 GB).
I don't think it's RAM, but I'm not sure. Any clues on how to check whether it's RAM with dmesg?

I'm also curious as to:

- What is `signal 15`?
- What is the role of `qmeventd`?

Here is the log file:

```
Sep 21 16:59:43 proxmox1 QEMU[2057567]: kvm: terminating on signal 15 from pid 1086 (/usr/sbin/qmeventd)
Sep 21 16:59:44 proxmox1 kernel: [13079587.474981] fwbr113i0: port 2(tap113i0) entered disabled state
Sep 21 16:59:44 proxmox1 kernel: [13079587.505845] fwbr113i0: port 1(fwln113i0) entered disabled state
Sep 21 16:59:44 proxmox1 kernel: [13079587.505950] vmbr0: port 10(fwpr113p0) entered disabled state
Sep 21 16:59:44 proxmox1 kernel: [13079587.506062] device fwln113i0 left promiscuous mode
Sep 21 16:59:44 proxmox1 kernel: [13079587.506064] fwbr113i0: port 1(fwln113i0) entered disabled state
Sep 21 16:59:44 proxmox1 kernel: [13079587.534504] device fwpr113p0 left promiscuous mode
Sep 21 16:59:44 proxmox1 kernel: [13079587.534509] vmbr0: port 10(fwpr113p0) entered disabled state
Sep 21 16:59:45 proxmox1 qmeventd[2819432]: Starting cleanup for 113
Sep 21 16:59:45 proxmox1 qmeventd[2819432]: Finished cleanup for 113
Sep 21 16:59:45 proxmox1 systemd[1]: 113.scope: Succeeded.
Sep 21 16:59:45 proxmox1 systemd[1]: 113.scope: Consumed 1d 12h 8min 18.911s CPU time.
```

RAM usage:

[attached screenshot: RAM usage graph]

```
# cat /proc/sys/vm/swappiness
60
```
 
hi,

the message "kvm: terminating on signal 15 from pid 1086 (/usr/sbin/qmeventd)" is normal: qmeventd sends that signal (15 = SIGTERM) when the QEMU process signals to us that it has shut down but is waiting for confirmation.
I sent a patch yesterday [0] that handles this a bit differently, so it shouldn't log that line anymore (as it's more confusing than helpful).
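(For reference, signal 15 is just SIGTERM, the standard "terminate gracefully" signal:)

```
$ kill -l 15        # resolve signal number 15 to its name
TERM
$ kill -15 <pid>    # same as a plain `kill <pid>`
```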

In any case, this is probably not the actual reason why the VM shuts down, but only a symptom.

0: https://lists.proxmox.com/pipermail/pve-devel/2022-September/054056.html
 
this is probably not really the reason why the vm shuts down, but only a symptom
Cool, thank you so much for the reply. I interpret what you're saying as: the shutdown could be caused by many different things, and additional issue isolation must take place before the cause is known.
 
The VM that randomly crashes is a Windows Server 2019 guest.
Compared to the other VMs it has relatively high RAM (say 8 GB versus 4 GB).
I don't think it's RAM, but I'm not sure. Any clues on how to check whether it's RAM with dmesg?
RAM usage:

[attached screenshot: RAM usage graph]

Check your syslog for OOM (out of memory) events.
With such a high (too high?!) overall memory utilization, and given that it always affects the VM with probably the highest memory utilization of all (that is my guess; you would have to confirm it), this would indicate to me that it is the OOM killer.
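For example (standard commands; `/var/log/syslog` is the Debian default path):

```
# look for OOM-killer activity in the kernel ring buffer and syslog
dmesg -T | grep -iE 'out of memory|oom-kill|killed process'
grep -i 'oom' /var/log/syslog
journalctl -k --since "2 days ago" | grep -i oom
```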
 
Cool, thank you so much for the reply. I interpret what you're saying as: the shutdown could be caused by many different things, and additional issue isolation must take place before the cause is known.
Yes, that's what I'm saying.
 
Hey,

is ballooning ON on your VMs?
Thanks guys, I just did an audit.

Ballooning is on for all the VMs, but I just noticed that out of 14 VMs, one doesn't have the guest agent installed. I believe ballooning also needs the guest agent to be effective?
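This is roughly how I did the audit (a sketch; `qm list` and `qm config` are the standard PVE CLI):

```
# print the ballooning / agent / memory settings of every VM on this node
for vmid in $(qm list | awk 'NR>1 {print $1}'); do
  echo "== VM $vmid =="
  qm config "$vmid" | grep -E '^(balloon|agent|memory):'
done
```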

Interesting also with regard to OOM: previously the host with the most RAM (16 GB) logged oom-killer events after a sudden death, and I reduced RAM usage to 14 GB; but this time I didn't see any oom-killer entries in `/var/log/syslog`, only what I posted.

I'm quite sure I'm pushing the host too hard. Upgrading the host's RAM is challenging but not impossible.

For now I'll focus on getting the guest agent installed on that one VM and reducing RAM where I can.

This isn't a serious problem, because it has only happened twice, spread very wide apart, and I could start the guest again quickly enough. Thanks so much for all your help and expertise; it really means a lot.
 
Hi @Thatoo, I don't think your problem is similar to what is discussed here in this particular thread.

What is discussed here is this log line:

> kvm: terminating on signal 15 from pid 1086 (/usr/sbin/qmeventd)

What you mention on the forum post you provided is this log line:

> KVM: entry failed, hardware error 0x80000021

Always scan a log file for the most relevant message; this takes time and practice. As far as 0x80000021 goes, here is at least one similar situation with lots of feedback:

https://forum.proxmox.com/threads/vm-shutdown-kvm-entry-failed-hardware-error-0x80000021.109410/
 
