VM shutdown, KVM: entry failed, hardware error 0x80000021

itNGO

Well-Known Member
Jun 12, 2020
Germany
it-ngo.com
Anyone tried to remove the virtual TPM from their Windows Server 2022 guests and/or disable Secureboot?
 

bilalwaheedch

New Member
May 20, 2022
And here I was thinking that the issue was with how I had installed/configured Proxmox. Glad to know I am not alone!
 
Jan 29, 2021
Czech Republic
Can anyone point me in the right direction on this?

Weirdly enough, I have 2x Proxmox hosts with duplicate hardware and 2x WS 2022 VMs with the same configuration, and only one VM has this issue, regardless of host.
I made a bash script as a single-node workaround: it checks the VM list one by one every 10 minutes with a ping, and if a VM does not respond, it starts it again with "qm start ...".
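A minimal sketch of what such a watchdog could look like (the VM IDs and guest IPs below are made-up examples; the real script obviously depends on your setup):

```shell
#!/bin/bash
# Hypothetical single-node watchdog: ping each guest and restart it via
# "qm start" if it stops answering. Run one pass from cron, e.g.:
#   */10 * * * * /usr/local/bin/vm-watchdog.sh --run

# vmid -> guest IP (example values; adjust to your VMs)
declare -A VMIP=( [101]=192.168.1.101 [102]=192.168.1.102 )

# one ping with a 2-second timeout; succeeds if the guest answers
up() { ping -c1 -W2 "$1" >/dev/null 2>&1; }

# single pass over all configured VMs
watchdog_pass() {
    local id
    for id in "${!VMIP[@]}"; do
        up "${VMIP[$id]}" || qm start "$id"
    done
}

# only act when invoked with --run, so sourcing the file is side-effect free
if [[ "${1:-}" == "--run" ]]; then
    watchdog_pass
fi
```

The poster's version apparently loops with a 10-minute sleep instead of cron; either approach works.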
 

stefal

New Member
May 20, 2022
Anyone tried to remove the virtual TPM from their Windows Server 2022 guests and/or disable Secureboot?
Yes, it makes no difference; it crashes after about 10 minutes, between installing updates and the last step of the OS install.
 
Dec 13, 2018
Interesting posts. Mine is crashing within a couple of hours of a 3:15AM backup ... every time.

I'll likely move to Win 2019 for this install but will watch this thread.
 

tom

Proxmox Staff Member
Staff member
Aug 29, 2006
The following ISO has this issue even during OS installation, with network not configured and internet not yet reachable. The only thing to select/customize is Datacenter with GUI (last in the list). It's the 180 day evaluation version of WS 2022. I doubt it already has any May updates.
https://software-download.microsoft...-1500.fe_release_SERVER_EVAL_x64FRE_en-us.iso
I tried this ISO (quite an old one, from May 2021) on 5 different servers. I was able to reproduce the issue only on one very old Xeon with a BIOS from 2013. On an identical server with an updated BIOS from 2018, the problem seems fixed.

But even on the server with the oldest BIOS, the one that crashed, the current Windows Server 2022 ISO (March 2022; I tested with the English ISO from https://www.microsoft.com/en-us/evalcenter/download-windows-server-2022) installed without issues.
 

stefal

New Member
May 20, 2022
Yes, it varies a lot.
Xeon E5-2630 with the microcode package installed: WS 2022 crashes almost instantly; W11 with the May updates crashed after a day.
i7-4930K with the microcode package installed: WS 2022 crashes once a day without Control Flow Guard disabled, and once every 4 days with CFG disabled. W11 not tested on the i7 yet.
 

stefal

New Member
May 20, 2022
A few more tests done.
Xeon E5-2630 with the microcode package installed, VM with no TPM and no pre-enrolled keys: WS 2022 installation successful, crash after 1 hour.
Starting manually with smm=off keeps the VM running, but the console keeps saying "Guest has not initialized the display (yet)". RDP, which previously worked, does not connect; I guess it's hung somewhere near POST.
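For anyone wondering what a "manual startup with smm=off" could look like: one way (an assumption on my side, not necessarily how stefal did it) is to dump the QEMU command line Proxmox generates and re-run it by hand with the -machine option amended:

```
# Print the full QEMU/KVM command line for the VM (vmid 109 is an example):
qm showcmd 109 --pretty
# ...then run that command manually, adding smm=off to the -machine argument,
# e.g. changing -machine 'type=pc-q35-6.2' to -machine 'type=pc-q35-6.2,smm=off'
```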
 

Stoiko Ivanov

Proxmox Staff Member
Staff member
May 2, 2018
Xeon E5-2630 with microcode package installed, VM with no TPM, no preenrolled keys: WS 2022 installation successful, crash after 1 hour.
Using manual startup with smm=off keeps the VM on but console keeps saying Guest has not initialized the display (yet). RDP that previously worked does not connect, I guess it's hung somewhere near POST.
Do all those cases also come with the error message
KVM: entry failed, hardware error 0x80000021
from the first post (it usually ends up in the journal)?
 

stefal

New Member
May 20, 2022
do all those cases also come with the error-message:

from the first post (usually ends up in the journal)?
WS 2022 installation successful, crash after 1 hour --> yes, the 0x80000021 error.
Using manual startup with smm=off keeps the VM on, but console keeps saying "Guest has not initialized the display (yet)" --> no 0x80000021 error; the process is alive (even now) but the VM is unresponsive.
 

luckyluk83

New Member
Jul 14, 2021
I have a 2698 v3, and after the latest updates a few days ago I have two errors.

1st is "status: io-error" when the VM is installed on a passed-through SATA controller (SAMBA). This can be resolved by changing the Async IO setting to threads.

2nd is KVM: entry failed, hardware error 0x80000021 (haven't found a workaround yet).
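For reference, the Async IO change mentioned above can be made in the GUI (the disk's advanced options) or directly in the VM config; a sketch of what the resulting disk line could look like (vmid, storage, and disk names are placeholders):

```
# /etc/pve/qemu-server/<vmid>.conf -- disk line with Async IO set to threads
scsi0: local-lvm:vm-109-disk-0,aio=threads
```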
 

itNGO

Well-Known Member
Jun 12, 2020
Germany
it-ngo.com
I have a 2698 v3, and after the latest updates a few days ago I have two errors.

1st is "status: io-error" when the VM is installed on a passed-through SATA controller (SAMBA). This can be resolved by changing the Async IO setting to threads.

2nd is KVM: entry failed, hardware error 0x80000021 (haven't found a workaround yet).
Pinning kernel 5.13.19-6-pve should help with the 2nd and "maybe" also with the 1st problem...
For the 1st, pve-qemu-kvm 6.2.0-8 from the test repo may also be worth a try...
 

si458

Active Member
Hey gang,
I'm having this same issue: I can't even start up a blank, fresh cloud-init Ubuntu 20.04 VM properly now, as QEMU is just killed after about 30 seconds for no apparent reason.
Was there any solution to this?
Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.35-1-pve)
pve-manager: 7.2-4 (running version: 7.2-4/ca9d43cc)
pve-kernel-5.15: 7.2-3
pve-kernel-helper: 7.2-3
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.1-8
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-4
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.1-1
proxmox-backup-file-restore: 2.2.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1
Code:
May 26 10:41:30 pve2 pvedaemon[1608]: <root@pam> starting task UPID:pve2:00005329:0004A102:628F4B4A:qmstart:109:root@pam:
May 26 10:41:30 pve2 pvedaemon[21289]: start VM 109: UPID:pve2:00005329:0004A102:628F4B4A:qmstart:109:root@pam:
May 26 10:41:31 pve2 systemd[1]: Started 109.scope.
May 26 10:41:31 pve2 systemd-udevd[21315]: Using default interface naming scheme 'v247'.
May 26 10:41:31 pve2 systemd-udevd[21315]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
May 26 10:41:32 pve2 kernel: [ 3036.185314] device tap109i0 entered promiscuous mode
May 26 10:41:32 pve2 kernel: [ 3036.203508] vmbr168: port 9(tap109i0) entered blocking state
May 26 10:41:32 pve2 kernel: [ 3036.203514] vmbr168: port 9(tap109i0) entered disabled state
May 26 10:41:32 pve2 kernel: [ 3036.203661] vmbr168: port 9(tap109i0) entered blocking state
May 26 10:41:32 pve2 kernel: [ 3036.203665] vmbr168: port 9(tap109i0) entered forwarding state
May 26 10:41:33 pve2 pvedaemon[1608]: <root@pam> end task UPID:pve2:00005329:0004A102:628F4B4A:qmstart:109:root@pam: OK
May 26 10:41:33 pve2 pvedaemon[21361]: starting vnc proxy UPID:pve2:00005371:0004A1E7:628F4B4D:vncproxy:109:root@pam:
May 26 10:41:33 pve2 pvedaemon[1608]: <root@pam> starting task UPID:pve2:00005371:0004A1E7:628F4B4D:vncproxy:109:root@pam:
May 26 10:41:33 pve2 pveproxy[20528]: proxy detected vanished client connection
May 26 10:41:33 pve2 pvedaemon[21372]: starting vnc proxy UPID:pve2:0000537C:0004A232:628F4B4D:vncproxy:109:root@pam:
May 26 10:41:33 pve2 pvedaemon[1607]: <root@pam> starting task UPID:pve2:0000537C:0004A232:628F4B4D:vncproxy:109:root@pam:
May 26 10:41:53 pve2 pvedaemon[1608]: <root@pam> end task UPID:pve2:00005371:0004A1E7:628F4B4D:vncproxy:109:root@pam: OK
May 26 10:41:59 pve2 pveproxy[1615]: worker 15501 finished
May 26 10:41:59 pve2 pveproxy[1615]: starting 1 worker(s)
May 26 10:41:59 pve2 pveproxy[1615]: worker 21498 started
May 26 10:42:01 pve2 pveproxy[21497]: got inotify poll request in wrong process - disabling inotify
May 26 10:42:18 pve2 QEMU[21332]: KVM: entry failed, hardware error 0x80000021
May 26 10:42:18 pve2 kernel: [ 3081.975359] set kvm_intel.dump_invalid_vmcs=1 to dump internal KVM state.
May 26 10:42:18 pve2 QEMU[21332]: If you're running a guest on an Intel machine without unrestricted mode
May 26 10:42:18 pve2 QEMU[21332]: support, the failure can be most likely due to the guest entering an invalid
May 26 10:42:18 pve2 QEMU[21332]: state for Intel VT. For example, the guest maybe running in big real mode
May 26 10:42:18 pve2 QEMU[21332]: which is not supported on less recent Intel processors.
May 26 10:42:18 pve2 QEMU[21332]: EAX=0000010b EBX=00000000 ECX=00000000 EDX=00000000
May 26 10:42:18 pve2 QEMU[21332]: ESI=00157f58 EDI=0000010b EBP=00157f48 ESP=00157f38
May 26 10:42:18 pve2 QEMU[21332]: EIP=00008000 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=1 HLT=0
May 26 10:42:18 pve2 QEMU[21332]: ES =0000 00000000 ffffffff 00809300
May 26 10:42:18 pve2 QEMU[21332]: CS =c400 7ffc4000 ffffffff 00809300
May 26 10:42:18 pve2 QEMU[21332]: SS =0000 00000000 ffffffff 00809300
May 26 10:42:18 pve2 QEMU[21332]: DS =0000 00000000 ffffffff 00809300
May 26 10:42:18 pve2 QEMU[21332]: FS =0000 00000000 ffffffff 00809300
May 26 10:42:18 pve2 QEMU[21332]: GS =0000 00000000 ffffffff 00809300
May 26 10:42:18 pve2 QEMU[21332]: LDT=0000 00000000 000fffff 00000000
May 26 10:42:18 pve2 QEMU[21332]: TR =0040 000ae000 0000206f 00008b00
May 26 10:42:18 pve2 QEMU[21332]: GDT=     000ac000 0000007f
May 26 10:42:18 pve2 QEMU[21332]: IDT=     00000000 00000000
May 26 10:42:18 pve2 QEMU[21332]: CR0=00050032 CR2=96c59da4 CR3=732e8004 CR4=00000000
May 26 10:42:18 pve2 QEMU[21332]: DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
May 26 10:42:18 pve2 QEMU[21332]: DR6=00000000ffff0ff0 DR7=0000000000000400
May 26 10:42:18 pve2 QEMU[21332]: EFER=0000000000000000
May 26 10:42:18 pve2 QEMU[21332]: Code=kvm: ../hw/core/cpu-sysemu.c:77: cpu_asidx_from_attrs: Assertion `ret < cpu->num_ases && ret >= 0' failed.
May 26 10:42:18 pve2 kernel: [ 3082.013568] vmbr168: port 9(tap109i0) entered disabled state
May 26 10:42:18 pve2 kernel: [ 3082.013815] vmbr168: port 9(tap109i0) entered disabled state
May 26 10:42:18 pve2 systemd[1]: 109.scope: Succeeded.
May 26 10:42:18 pve2 systemd[1]: 109.scope: Consumed 1min 1.061s CPU time.
May 26 10:42:19 pve2 pvedaemon[1607]: <root@pam> end task UPID:pve2:0000537C:0004A232:628F4B4D:vncproxy:109:root@pam: OK
May 26 10:42:19 pve2 pveproxy[21497]: worker exit
May 26 10:42:20 pve2 qmeventd[21623]: Starting cleanup for 109
May 26 10:42:20 pve2 qmeventd[21623]: Finished cleanup for 109
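The kernel hint in the log above ("set kvm_intel.dump_invalid_vmcs=1 to dump internal KVM state") can be followed to get more diagnostic output; a sketch, assuming the Intel kvm_intel module is in use:

```
# /etc/modprobe.d/kvm-debug.conf -- dump the VMCS when the guest state is invalid
options kvm_intel dump_invalid_vmcs=1
```

On a running host, writing 1 to /sys/module/kvm_intel/parameters/dump_invalid_vmcs may also work without reloading the module, if the parameter is writable on your kernel.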
 
Jan 29, 2021
Czech Republic
Hey gang,
I'm having this same issue: I can't even start up a blank, fresh cloud-init Ubuntu 20.04 VM properly now, as QEMU is just killed after about 30 seconds for no apparent reason.
Was there any solution to this?
pve-kernel-5.13.19-6-pve seems to be stable; try switching to that, see https://pve.proxmox.com/wiki/Host_Bootloader
 

si458

Active Member
pve-kernel-5.13.19-6-pve seems to be stable; try switching to that, see https://pve.proxmox.com/wiki/Host_Bootloader
Not sure how to switch, as the guide tells me to format a partition, which I don't feel confident doing.
However, weirdly enough, I do have both kernels installed:
Code:
root@pve2:/boot# dpkg --list | grep pve-kernel-5.
ii  pve-kernel-5.13                      7.1-9                                  all          Latest Proxmox VE Kernel Image
ii  pve-kernel-5.13.19-6-pve             5.13.19-15                             amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-5.15                      7.2-3                                  all          Latest Proxmox VE Kernel Image
ii  pve-kernel-5.15.35-1-pve             5.15.35-3                              amd64        The Proxmox PVE Kernel Image
 

daros

Active Member
Jul 22, 2014
Not sure how to switch, as the guide tells me to format a partition, which I don't feel confident doing.
However, weirdly enough, I do have both kernels installed:

Here is what I did:
Code:
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 5.13.19-6-pve
reboot
 

si458

Active Member
Here is what I did:
Code:
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 5.13.19-6-pve
reboot
I was just about to post saying I got it working; I did just that, and it started a few of my VMs straight away, unlike the other kernel.

The docs really need changing to say BIOS or UEFI; if BIOS, skip all the rubbish below and go straight to the pin section.
 

stefal

New Member
May 20, 2022
Here is what I did:
Code:
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 5.13.19-6-pve
reboot
If the 5.13 kernel is not in the list, then install it with
Code:
apt install pve-kernel-5.13.19-6-pve
I'll watch my VMs for a week to see if it really helped.
 
