[SOLVED] Another frozen Windows VM issue

BORNXenon

Active Member
I've seen lots of threads about Windows VMs freezing on version 7.1, with the resolution being to use VirtIO SCSI.
I have a similar issue with a Windows Server 2019 VM, and it IS using VirtIO SCSI. It started out of the blue one day at 10am: the VM became unresponsive, with the following in the host syslog:

Code:
VM 102 qmp command failed - VM 102 qmp command 'guest-network-get-interfaces' failed - got timeout

With the following logged several times after that:

Code:
VM 102 qmp command failed - VM 102 qmp command 'guest-ping' failed - got timeout

Approximately 10 minutes later, the VM started responding and worked as it should for the rest of the day; the next day, however, the same thing happened again at 10am.
Following a reboot of the VM and an update of the guest drivers, it was fine for a week until today, when it started freezing again, except now it does it every hour, for 10 minutes at a time.

I have tried moving the VM to another host in the cluster to no avail.

Code:
pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-5 (running version: 7.1-5/6fe299a0)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-5
pve-kernel-5.13.19-1-pve: 5.13.19-2
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.11.22-3-pve: 5.11.22-7
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 16.2.6-pve2
ceph-fuse: 16.2.6-pve2
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.14-1
proxmox-backup-file-restore: 2.0.14-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.4-2
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-1
pve-qemu-kvm: 6.1.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-3
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3


Any suggestions?
 
hi,

Approximately 10 minutes later, the VM started responding and worked as it should for the rest of the day; the next day, however, the same thing happened again at 10am.
do you happen to have backup jobs set up for this VM?
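for reference, you can check for any backup schedules that mention the VM directly from the shell, e.g. (just a sketch; the exact file locations can vary a little between PVE versions):

Code:
# look for backup jobs that reference VM 102 (legacy cron-style jobs and newer scheduled jobs)
grep -Hs 102 /etc/pve/vzdump.cron /etc/pve/jobs.cfg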

also could you post your full VM configuration? qm config VMID (VMID = 102 for you)

also have you noticed any other slowdowns or freezes in other guests on this host?
I have tried moving the VM to another host in the cluster to no avail.
are the PVE package versions on the same levels? please check pveversion -v output on all nodes
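a quick loop like this makes the comparison easier (node names below are just placeholders, substitute your own):

Code:
# compare package versions across all cluster nodes (replace node1..node3 with your node names)
for node in node1 node2 node3; do
    echo "== $node =="
    ssh root@$node pveversion -v
done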

and make sure to see our best practices for windows server [0]

[0]: https://pve.proxmox.com/wiki/Windows_2019_guest_best_practices
 
hey,

Have you tried it without the PVE firewall enabled on your virtual network card?

Another thought: have you declared the CPU flags according to your AMD/Intel configuration?
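For example, something along these lines (just a sketch with placeholder values; take the MAC and bridge from your own net0 line):

Code:
# disable the PVE firewall on the VM's NIC (keep your existing MAC/bridge values)
qm set 102 --net0 virtio=<MAC>,bridge=vmbr0,firewall=0
# pass the host CPU's feature flags through to the guest (only if all cluster nodes have the same CPU)
qm set 102 --cpu host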

Best regards,
 
hi,


do you happen to have backup jobs set up for this VM?

also could you post your full VM configuration? qm config VMID (VMID = 102 for you)

also have you noticed any other slowdowns or freezes in other guests on this host?

are the PVE package versions on the same levels? please check pveversion -v output on all nodes

and make sure to see our best practices for windows server [0]

[0]: https://pve.proxmox.com/wiki/Windows_2019_guest_best_practices
I do have backup jobs set up for this server, but they don't run until 7pm.
The freezing has occurred today at 10am, 11am and 12pm, but interestingly not at 1pm.

VM Config:
Code:
qm config 102
agent: 1
boot: order=scsi0;ide2;net0
cores: 4
ide2: none,media=cdrom
machine: pc-i440fx-6.0
memory: 8192
name: MRWS1
net0: virtio=EE:BC:B9:B4:4B:A2,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
scsi0: rbd:vm-102-disk-0,discard=on,size=160G
scsihw: virtio-scsi-pci
smbios1: uuid=c89fb360-f484-40cb-ba78-7d3c195f3a66
sockets: 1
vmgenid: d1ded92e-bfa0-4ee2-a873-5ce9e45108e5

All hosts are at the same level; it's a 3-node cluster running Ceph.

I've just been reading another thread with a similar issue:

https://forum.proxmox.com/threads/problems-after-upgrade-from-pve-6-to-pve-7.102852/#post-445939

In that thread, @Fabian_E states that there is a fix in pve-qemu-kvm 6.1.0-3. I've noticed I'm running pve-qemu-kvm 6.1.0-2; do you think it's worth trying the update?
 
In that thread, @Fabian_E states that there is a fix in pve-qemu-kvm 6.1.0-3. I've noticed I'm running pve-qemu-kvm 6.1.0-2; do you think it's worth trying the update?
yes please try upgrading :)
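the standard upgrade route should pull in the newer pve-qemu-kvm, e.g. on each node:

Code:
# refresh the package lists and apply all pending updates
apt update && apt full-upgrade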
 
Cluster upgraded with no issues; I'll keep a close eye on the VM and report back.
Incidentally, is it worth changing the Async I/O? I've seen it recommended in several threads, but I don't like changing things when I don't know what they will do!
 
2 days on and no further freezes. I'll keep an eye on it all next week, and if the issue stays away I'll update my other cluster!
 
2 days on and no further freezes. I'll keep an eye on it all next week, and if the issue stays away I'll update my other cluster!
that's great. if the issue doesn't reappear, please mark the thread also as [SOLVED] so others know what to expect too :)
 
that's great. if the issue doesn't reappear, please mark the thread also as [SOLVED] so others know what to expect too :)
Unfortunately the issue is not solved. At 10am today the VM froze again, although it doesn't appear to have been frozen for as long as before.
Code:
VM 102 qmp command failed - VM 102 qmp command 'guest-network-get-interfaces' failed - got timeout
VM 102 qmp command failed - VM 102 qmp command 'guest-ping' failed - got timeout

As above, is it worth changing the Async I/O setting?
 
Unfortunately the issue is not solved. At 10am today the VM froze again, although it doesn't appear to have been frozen for as long as before.
Code:
VM 102 qmp command failed - VM 102 qmp command 'guest-network-get-interfaces' failed - got timeout
VM 102 qmp command failed - VM 102 qmp command 'guest-ping' failed - got timeout

As above, is it worth changing the Async I/O setting?
hmm... could you post your pveversion -v output after the upgrades?

you can of course try changing the async io setting to see if it makes any difference.
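for reference, the async io mode is set per disk, so something along these lines should do it (a sketch only; keep the rest of your scsi0 line exactly as in your config, and note it only takes effect once the VM has been fully stopped and started again):

Code:
# switch the disk's async I/O mode to native (applied on the next full stop/start of the VM)
qm set 102 --scsi0 rbd:vm-102-disk-0,discard=on,aio=native,size=160G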

do you see anything else in the journals while this happens?
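e.g. something like this (the timestamps are just an example; narrow them to the window around a freeze):

Code:
# dump the host journal for the window around a freeze
journalctl --since "2022-02-14 09:55:00" --until "2022-02-14 10:15:00" --no-pager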
 
PVE version details:
Code:
root@mrpve2:~# pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-4-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-10
pve-kernel-5.13: 7.1-7
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-5
pve-kernel-5.13.19-4-pve: 5.13.19-9
pve-kernel-5.13.19-1-pve: 5.13.19-3
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.11.22-3-pve: 5.11.22-7
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 16.2.7
ceph-fuse: 16.2.7
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-2
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-1
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-5
pve-cluster: 7.1-3
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-5
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-1
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1

Syslog from around the time of the freeze:
Code:
Feb 14 09:57:34 mrpve2 pmxcfs[3015]: [status] notice: received log
Feb 14 10:06:54 mrpve2 pvedaemon[3166381]: VM 102 qmp command failed - VM 102 qmp command 'guest-network-get-interfaces' failed - got timeout
Feb 14 10:06:56 mrpve2 pmxcfs[3015]: [status] notice: received log
Feb 14 10:06:56 mrpve2 sshd[2798437]: Accepted publickey for root from 10.0.1.201 port 54830 ssh2: RSA SHA256:MV6WQ3knVmq+Laa1b45vZ0IWsm8IcKHHQEuOTwV/Bg0
Feb 14 10:06:56 mrpve2 sshd[2798437]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Feb 14 10:06:56 mrpve2 systemd[1]: Created slice User Slice of UID 0.
Feb 14 10:06:56 mrpve2 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 14 10:06:56 mrpve2 systemd-logind[1661]: New session 204 of user root.
Feb 14 10:06:56 mrpve2 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 14 10:06:56 mrpve2 systemd[1]: Starting User Manager for UID 0...
Feb 14 10:06:56 mrpve2 systemd[2798440]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Feb 14 10:06:56 mrpve2 systemd[2798445]: gpgconf: error running '/usr/lib/gnupg/scdaemon': probably not installed
Feb 14 10:06:56 mrpve2 systemd[2798440]: Queued start job for default target Main User Target.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Created slice User Application Slice.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Reached target Paths.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Reached target Timers.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Listening on GnuPG network certificate management daemon.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
Feb 14 10:06:56 mrpve2 systemd[2798440]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Feb 14 10:06:56 mrpve2 systemd[2798440]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Feb 14 10:06:56 mrpve2 systemd[2798440]: Listening on GnuPG cryptographic agent and passphrase cache.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Reached target Sockets.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Reached target Basic System.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Reached target Main User Target.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Startup finished in 268ms.
Feb 14 10:06:56 mrpve2 systemd[1]: Started User Manager for UID 0.
Feb 14 10:06:56 mrpve2 systemd[1]: Started Session 204 of user root.
Feb 14 10:07:13 mrpve2 pvedaemon[3181387]: VM 102 qmp command failed - VM 102 qmp command 'guest-ping' failed - got timeout
Feb 14 10:07:32 mrpve2 pvedaemon[3267123]: VM 102 qmp command failed - VM 102 qmp command 'guest-ping' failed - got timeout
Feb 14 10:10:37 mrpve2 pmxcfs[3015]: [dcdb] notice: data verification successful
Feb 14 10:12:35 mrpve2 pmxcfs[3015]: [status] notice: received log

Journal since 9am:
Code:
root@mrpve2:~# journalctl --since "2022-02-14 09:00:00" --no-pager
-- Journal begins at Thu 2021-08-05 12:29:54 BST, ends at Mon 2022-02-14 10:27:35 GMT. --
Feb 14 09:10:37 mrpve2 pmxcfs[3015]: [dcdb] notice: data verification successful
Feb 14 09:12:54 mrpve2 pmxcfs[3015]: [status] notice: received log
Feb 14 09:17:01 mrpve2 CRON[2688712]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Feb 14 09:17:01 mrpve2 CRON[2688713]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Feb 14 09:17:01 mrpve2 CRON[2688712]: pam_unix(cron:session): session closed for user root
Feb 14 09:27:32 mrpve2 pmxcfs[3015]: [status] notice: received log
Feb 14 09:42:33 mrpve2 pmxcfs[3015]: [status] notice: received log
Feb 14 09:57:34 mrpve2 pmxcfs[3015]: [status] notice: received log
Feb 14 10:06:54 mrpve2 pvedaemon[3166381]: VM 102 qmp command failed - VM 102 qmp command 'guest-network-get-interfaces' failed - got timeout
Feb 14 10:06:56 mrpve2 pmxcfs[3015]: [status] notice: received log
Feb 14 10:06:56 mrpve2 sshd[2798437]: Accepted publickey for root from 10.0.1.201 port 54830 ssh2: RSA SHA256:MV6WQ3knVmq+Laa1b45vZ0IWsm8IcKHHQEuOTwV/Bg0
Feb 14 10:06:56 mrpve2 sshd[2798437]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Feb 14 10:06:56 mrpve2 systemd[1]: Created slice User Slice of UID 0.
Feb 14 10:06:56 mrpve2 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 14 10:06:56 mrpve2 systemd-logind[1661]: New session 204 of user root.
Feb 14 10:06:56 mrpve2 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 14 10:06:56 mrpve2 systemd[1]: Starting User Manager for UID 0...
Feb 14 10:06:56 mrpve2 systemd[2798440]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Feb 14 10:06:56 mrpve2 systemd[2798445]: gpgconf: error running '/usr/lib/gnupg/scdaemon': probably not installed
Feb 14 10:06:56 mrpve2 systemd[2798440]: Queued start job for default target Main User Target.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Created slice User Application Slice.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Reached target Paths.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Reached target Timers.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Listening on GnuPG network certificate management daemon.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
Feb 14 10:06:56 mrpve2 systemd[2798440]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Feb 14 10:06:56 mrpve2 systemd[2798440]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Feb 14 10:06:56 mrpve2 systemd[2798440]: Listening on GnuPG cryptographic agent and passphrase cache.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Reached target Sockets.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Reached target Basic System.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Reached target Main User Target.
Feb 14 10:06:56 mrpve2 systemd[2798440]: Startup finished in 268ms.
Feb 14 10:06:56 mrpve2 systemd[1]: Started User Manager for UID 0.
Feb 14 10:06:56 mrpve2 systemd[1]: Started Session 204 of user root.
Feb 14 10:07:13 mrpve2 pvedaemon[3181387]: VM 102 qmp command failed - VM 102 qmp command 'guest-ping' failed - got timeout
Feb 14 10:07:32 mrpve2 pvedaemon[3267123]: VM 102 qmp command failed - VM 102 qmp command 'guest-ping' failed - got timeout
Feb 14 10:10:37 mrpve2 pmxcfs[3015]: [dcdb] notice: data verification successful
Feb 14 10:12:35 mrpve2 pmxcfs[3015]: [status] notice: received log
Feb 14 10:17:01 mrpve2 CRON[2821823]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Feb 14 10:17:01 mrpve2 CRON[2821824]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Feb 14 10:17:01 mrpve2 CRON[2821823]: pam_unix(cron:session): session closed for user root
Feb 14 10:20:24 mrpve2 sshd[2829329]: Accepted password for root from 10.0.1.101 port 57969 ssh2
Feb 14 10:20:24 mrpve2 sshd[2829329]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Feb 14 10:20:24 mrpve2 systemd-logind[1661]: New session 207 of user root.
Feb 14 10:20:24 mrpve2 systemd[1]: Started Session 207 of user root.
Feb 14 10:27:35 mrpve2 pmxcfs[3015]: [status] notice: received log
 
Today's syslog...
Code:
Feb 15 09:17:52 mrpve2 ceph-osd[3161]: 2022-02-15T09:17:52.799+0000 7fa37ba17700 -1 osd.6 pg_epoch: 11989 pg[3.ed( v 11989'4124822 (11971'4121884,11989'4124822] local-lis/les=11859/11860 n=446 ec=827/54 lis/c=11859/11859 les/c/f=11860/11860/0 sis=11859) [6,3,12] r=0 lpr=11859 crt=11989'4124822 lcod 11989'4124821 mlcod 11989'4124821 active+clean]  scrubber pg(3.ed) handle_scrub_reserve_grant: received unsolicited reservation grant from osd 12 (0x558c3f2b8dc0)
Feb 15 09:22:32 mrpve2 pmxcfs[3015]: [status] notice: received log
Feb 15 09:37:32 mrpve2 pmxcfs[3015]: [status] notice: received log
Feb 15 09:52:32 mrpve2 pmxcfs[3015]: [status] notice: received log
Feb 15 10:07:21 mrpve2 pvedaemon[3231224]: VM 102 qmp command failed - VM 102 qmp command 'guest-network-get-interfaces' failed - got timeout
Feb 15 10:07:32 mrpve2 pmxcfs[3015]: [status] notice: received log
Feb 15 10:07:40 mrpve2 pvedaemon[3264961]: VM 102 qmp command failed - VM 102 qmp command 'guest-ping' failed - got timeout
Feb 15 10:08:16 mrpve2 pvedaemon[3231224]: VM 102 qmp command failed - VM 102 qmp command 'guest-ping' failed - got timeout
Feb 15 10:09:50 mrpve2 pvedaemon[3183817]: VM 102 qmp command failed - VM 102 qmp command 'guest-ping' failed - got timeout
Feb 15 10:10:37 mrpve2 pmxcfs[3015]: [dcdb] notice: data verification successful
Feb 15 10:12:03 mrpve2 pvedaemon[3183817]: worker exit
Feb 15 10:12:03 mrpve2 pvedaemon[4218]: worker 3183817 finished
Feb 15 10:12:03 mrpve2 pvedaemon[4218]: starting 1 worker(s)
Feb 15 10:12:03 mrpve2 pvedaemon[4218]: worker 1843837 started

I've not had a chance to change the Async IO yet. I'm running Default (no cache), so I believe that I need to change the Async IO to 'native'; can anyone confirm?
 
could you attempt to install the opt-in 5.15 kernel? [0] apt update && apt install pve-kernel-5.15 followed by a reboot should do it.

[0]: https://forum.proxmox.com/threads/opt-in-linux-kernel-5-15-for-proxmox-ve-7-x-available.100936/
Thanks, I've been keeping an eye on that thread and have seen the various issues some people have experienced; as this cluster is in production, I'd rather not use it as a testbed!

I'm still not convinced that the issue isn't with the VM itself. This particular VM had been running without issue since September 2021; it only started the daily freezing around the 20th of January.
I have a second production cluster on the same PVE version with two Windows Server 2012 R2 servers on it, and there have been no freezes from either of those VMs, although there is a Windows 10 VM on that cluster, used for testing, which is painfully slow for no reason I can see.

I think I'm going to try the Async IO change first and failing that run up another Windows Server 2019 VM to see if it suffers the same issues while I wait for the official release of the 5.15 kernel.
 
No freeze at 10am today.
This may just be because I rebooted the VM, or it may be that one of the things I did yesterday has solved the issue; I will wait and see!
So, what I did:

Updated the Trend AntiVirus Security Server software that runs on this VM to the latest patch.
Installed all the outstanding Windows updates on the VM and rebooted.
Shut down the server, changed the Async IO to 'native' and restarted.
 
UPDATE: 2 weeks on and no more freezes.
I'm hesitant to say it, but looks like the issue may be solved.
Of the 3 things I did (AV update, Windows update, Async IO change), I'm pretty sure it was changing the Async IO to native that fixed it. The reason I say this is that I installed another Windows Server 2019 VM for testing; it was just a base OS install with no additional software, and it displayed the same symptoms. Changing the Async IO on that VM also solved its freezing issue.
 
I'm hesitant to say it, but looks like the issue may be solved.
Of the 3 things I did (AV update, Windows update, Async IO change), I'm pretty sure it was changing the Async IO to native that fixed it. The reason I say this is that I installed another Windows Server 2019 VM for testing; it was just a base OS install with no additional software, and it displayed the same symptoms. Changing the Async IO on that VM also solved its freezing issue.
thank you for the feedback! glad that the issue went away :)
 
