VERY strange puzzle points to Proxmox bug? Help me diagnose please!

MrPete

Active Member
This is one of the strangest issues in my 50+ years of computing. Your help, ideas, and diagnostic or solution suggestions are most welcome!

CONTEXT
  • I have a pair of nearly identical Proxmox VM hosts (HP Z2 G5 SFF, well loaded).
  • Both use 1 TB NVMe drives in a ZFS mirror as primary storage.
  • Host 2 is MUCH less busy than Host 1 at present.
  • All but one of the Host 2 VMs have been shut down since late March or earlier.

OBVIOUS SYMPTOM
  • Every Saturday at 00:20 +/- a few minutes, Host 2 loses NVMe mirror drive #2, with no apparent error other than the device simply being gone. ZFS notes the pool is degraded.
  • On reboot, the pool resilvers perfectly without error and all is well for another week.

OTHER DIAGNOSTICS AND TESTS DONE
  • Swapped the NVMe sticks: the same slot still drops out, so this happens with any NVMe stick. (Perhaps of interest: my sticks are slightly different sizes, but the mirror is fine.)
  • Ran many hardware diagnostics without error (non-destructive checks of this kind are sketched just after this list).
  • There are no cron jobs at that time.
  • There is no special network traffic at that time.
  • Tried PCIe rescans etc. and found nothing.
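
A hedged example of the kind of non-destructive checks meant above (smartmontools ships with PVE; nvme-cli is an extra package and may need installing; substitute your own device and pool names):

smartctl -a /dev/nvme1        # NVMe controller SMART/health data and error log
zpool status -v rpool         # pool state plus any logged read/write/checksum errors
nvme smart-log /dev/nvme1     # needs the nvme-cli package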

EVIDENCE POINTING TO PROXMOX IN SOME WAY

Careful examination of kern.log and syslog shows a 100% correlation with pvescheduler VM backup activity, AND I see some potentially strange side effects related to that backup task:

syslog
  • VM 712 finishes backup
  • VM 790 begins backup
  • 64 ms later, the NVMe disappears
kern.log
  • Kernel activity on the tap port for VM 712 wraps up
  • NVMe disappears
  • Kernel activity on the tap port for VM 790 begins

This exact sequence is always there. The timing varies a bit due to other activity, but the sequence itself is constant.
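
For anyone wanting to reproduce the correlation, a hedged one-liner to pull the relevant lines in one pass (the date window is from the 4 May event below; adjust it to your own backup night, and if the systemd journal doesn't cover that window, grep the same pattern in /var/log/syslog and /var/log/kern.log):

journalctl --since "2024-05-04 00:15" --until "2024-05-04 00:30" | grep -Ei 'vzdump|Backup of VM|zio|nvme|tap7'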

My questions

  • Has anything like this been seen by others? (Online searches have not helped.)
  • Any ideas on how to diagnose this further?
  • (Note: I hesitate to do massive reloads. I'd rather help find the bug than simply obscure it / make it disappear.)
THANK YOU!!!
Pete

Data Details below

pvescheduler file (/etc/pve/jobs.cfg)
vzdump: 3f9a084ec4febd6aa668de2f3f6d838fba2de390:1
schedule sat 00:00
all 1
compress zstd
enabled 1
mailnotification always
mode snapshot
quiet 1
storage nas-share

vzdump: 648efa5c74376df725b01f7cdac5863435aa48bd:1
schedule sat 00:00
all 1
compress zstd
enabled 0
mailnotification always
mode snapshot
quiet 1
storage nas-share


6 April, syslog
2024-04-06T00:19:29.849849-06:00 pve2a qmeventd[1011121]: Finished cleanup for 712
2024-04-06T00:19:32.783427-06:00 pve2a pvescheduler[1001041]: INFO: Finished Backup of VM 712 (00:03:09)
2024-04-06T00:19:32.819516-06:00 pve2a pvescheduler[1001041]: INFO: Starting Backup of VM 790 (qemu)

2024-04-06T00:19:32.883951-06:00 pve2a kernel: [197237.341987] zio pool=rpool vdev=/dev/disk/by-id/nvme-CT1000T500SSD5_234344CF32EB-part3 error=5 type=1 offset=270336 size=8192 flags=721601

2024-04-06T00:19:32.897159-06:00 pve2a zed: eid=31 class=statechange pool='rpool' vdev=nvme-CT1000T500SSD5_234344CF32EB-part3 vdev_state=REMOVED
2024-04-06T00:19:32.897420-06:00 pve2a zed: eid=32 class=removed pool='rpool' vdev=nvme-CT1000T500SSD5_234344CF32EB-part3 vdev_state=REMOVED
2024-04-06T00:19:32.915254-06:00 pve2a zed: eid=33 class=config_sync pool='rpool'
6 April, kern.log
2024-04-06T00:16:24.579990-06:00 pve2a kernel: [197049.035037] tap712i0: entered allmulticast mode
2024-04-06T00:16:24.579991-06:00 pve2a kernel: [197049.035386] vmbr0: port 3(tap712i0) entered blocking state
2024-04-06T00:16:24.579991-06:00 pve2a kernel: [197049.035675] vmbr0: port 3(tap712i0) entered forwarding state
2024-04-06T00:19:29.251997-06:00 pve2a kernel: [197233.709327] tap712i0: left allmulticast mode
2024-04-06T00:19:29.252009-06:00 pve2a kernel: [197233.709638] vmbr0: port 3(tap712i0) entered disabled state

2024-04-06T00:19:32.883951-06:00 pve2a kernel: [197237.341987] zio pool=rpool vdev=/dev/disk/by-id/nvme-CT1000T500SSD5_234344CF32EB-part3 error=5 type=1 offset=270336 size=8192 flags=721601

2024-04-06T00:19:33.815994-06:00 pve2a kernel: [197238.273119] tap790i0: entered promiscuous mode
2024-04-06T00:19:33.828143-06:00 pve2a kernel: [197238.286229] vmbr0: port 3(tap790i0) entered blocking state
2024-04-06T00:19:33.828151-06:00 pve2a kernel: [197238.286530] vmbr0: port 3(tap790i0) entered disabled state
2024-04-06T00:19:33.831977-06:00 pve2a kernel: [197238.286852] tap790i0: entered allmulticast mode
2024-04-06T00:19:33.832007-06:00 pve2a kernel: [197238.287214] vmbr0: port 3(tap790i0) entered blocking state
2024-04-06T00:19:33.832009-06:00 pve2a kernel: [197238.287510] vmbr0: port 3(tap790i0) entered forwarding state
2024-04-06T00:22:28.144029-06:00 pve2a kernel: [197412.601109] tap790i0: left allmulticast mode
2024-04-06T00:22:28.144040-06:00 pve2a kernel: [197412.601421] vmbr0: port 3(tap790i0) entered disabled state
2024-04-06T00:22:31.131973-06:00 pve2a kernel: [197415.590766] tap791i0: entered promiscuous mode
2024-04-06T00:22:31.147942-06:00 pve2a kernel: [197415.604230] vmbr0: port 3(tap791i0) entered blocking state
4 May, syslog
2024-05-04T00:20:57.864497-06:00 pve2a pvescheduler[2427079]: INFO: Finished Backup of VM 712 (00:03:13)
2024-05-04T00:20:57.875064-06:00 pve2a pvescheduler[2427079]: INFO: Starting Backup of VM 790 (qemu)

2024-05-04T00:20:57.945317-06:00 pve2a kernel: [448856.898381] zio pool=rpool vdev=/dev/disk/by-id/nvme-SAMSUNG_MZVL21T0HCLR-00BH1_S641NF0T407505-part3 error=5 type=1 offset=270336 size=8192 flags=721601
2024-05-04T00:20:57.945328-06:00 pve2a kernel: [448856.898392] zio pool=rpool vdev=/dev/disk/by-id/nvme-SAMSUNG_MZVL21T0HCLR-00BH1_S641NF0T407505-part3 error=5 type=1 offset=1023671017472 size=8192 flags=721601
2024-05-04T00:20:57.945329-06:00 pve2a kernel: [448856.898397] zio pool=rpool vdev=/dev/disk/by-id/nvme-SAMSUNG_MZVL21T0HCLR-00BH1_S641NF0T407505-part3 error=5 type=1 offset=1023671279616 size=8192 flags=721601

2024-05-04T00:20:57.953044-06:00 pve2a zed: eid=18 class=statechange pool='rpool' vdev=nvme-SAMSUNG_MZVL21T0HCLR-00BH1_S641NF0T407505-part3 vdev_state=REMOVED
2024-05-04T00:20:57.953148-06:00 pve2a zed: eid=19 class=removed pool='rpool' vdev=nvme-SAMSUNG_MZVL21T0HCLR-00BH1_S641NF0T407505-part3 vdev_state=REMOVED
2024-05-04T00:20:57.961195-06:00 pve2a zed: eid=20 class=config_sync pool='rpool'
4 May, kern.log
2024-05-04T00:17:45.112362-06:00 pve2a kernel: [448664.064154] vmbr0: port 2(tap712i0) entered forwarding state
2024-05-04T00:20:56.163498-06:00 pve2a kernel: [448855.116201] tap712i0: left allmulticast mode
2024-05-04T00:20:56.163507-06:00 pve2a kernel: [448855.116219] vmbr0: port 2(tap712i0) entered disabled state

2024-05-04T00:20:57.945317-06:00 pve2a kernel: [448856.898381] zio pool=rpool vdev=/dev/disk/by-id/nvme-SAMSUNG_MZVL21T0HCLR-00BH1_S641NF0T407505-part3 error=5 type=1 offset=270336 size=8192 flags=721601
2024-05-04T00:20:57.945328-06:00 pve2a kernel: [448856.898392] zio pool=rpool vdev=/dev/disk/by-id/nvme-SAMSUNG_MZVL21T0HCLR-00BH1_S641NF0T407505-part3 error=5 type=1 offset=1023671017472 size=8192 flags=721601
2024-05-04T00:20:57.945329-06:00 pve2a kernel: [448856.898397] zio pool=rpool vdev=/dev/disk/by-id/nvme-SAMSUNG_MZVL21T0HCLR-00BH1_S641NF0T407505-part3 error=5 type=1 offset=1023671279616 size=8192 flags=721601

2024-05-04T00:20:58.854316-06:00 pve2a kernel: [448857.807202] tap790i0: entered promiscuous mode
2024-05-04T00:20:58.867310-06:00 pve2a kernel: [448857.820449] vmbr0: port 2(tap790i0) entered blocking state
2024-05-04T00:20:58.867318-06:00 pve2a kernel: [448857.820454] vmbr0: port 2(tap790i0) entered disabled state
2024-05-04T00:20:58.867319-06:00 pve2a kernel: [448857.820463] tap790i0: entered allmulticast mode
2024-05-04T00:20:58.867320-06:00 pve2a kernel: [448857.820511] vmbr0: port 2(tap790i0) entered blocking state
2024-05-04T00:20:58.867321-06:00 pve2a kernel: [448857.820514] vmbr0: port 2(tap790i0) entered forwarding state
2024-05-04T00:26:36.809517-06:00 pve2a kernel: [449195.763577] tap790i0: left allmulticast mode
 
Did the issue start recently? Are you using the backup fleecing feature?
It started the first Saturday after I added the second NVMe/mirror in early April. (Since it only happens once a week, it took a while to eliminate seemingly more obvious causes. My first thought was hardware issues... ;) )

I didn't know fleecing existed, so no ;) ... AND remember: I can't imagine fleecing affecting this anyway, since the VMs being backed up when this happens are shut down, and have been since March. (I suspect the backup job file I shared above would say something about fleecing if it were enabled?)

Which reminds me of something I don't understand: why would Proxmox play with the networking of a stopped VM before/after a backup? These VMs have been shut down for a long time! It looks like Proxmox is activating networking for a shut-down VM?!
 
It looks like Proxmox is activating networking for a shut-down VM?!
Yes, the VM needs to be started because storage and QEMU need to be activated. If the VM was shut down, it will be started in a "paused" state, i.e. not fully up.

Keep in mind that backup is done via a QEMU filter; if the VM/QEMU is not running, there is no filter. Plus it has to be storage-independent: whether it's a qcow, a Ceph volume, ZFS, or thick/thin LVM, the disk needs to be activated on the source PVE host.

The activation portion is left to the specific storage plugin. It may not do much in the qcow case, and quite a lot in the thick-LVM case.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Yes, the VM needs to be started because storage and QEMU need to be activated. If the VM was shut down, it will be started in a "paused" state, i.e. not fully up.

Keep in mind that backup is done via a QEMU filter...
So perhaps there's an unusual bug related to this... talk about obscure.

I may need to deep-dive into the debug levels of the process.
 
Hi, @MrPete ,

Could you try disabling the zed service? I have seen problems with this service in the past. As I can see from your logs, it is the zed service that removes the NVMe from the pool!
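
A hedged sketch of how one might test that idea and inspect what ZED saw (assumes the standard zfs-zed systemd unit on PVE; stopping it also suspends hot-spare and notification handling, so re-enable it after one backup cycle):

zpool events -v rpool      # review the events ZED reacted to
systemctl stop zfs-zed     # temporarily stop the daemon over one Saturday backup window
systemctl start zfs-zed    # re-enable afterwards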

Good luck / Bafta !
 
Hi,
please share your VM configuration (qm config 790) and the output of pveversion -v. Is it maybe using passthrough?
 
Hi,
please share your VM configuration (qm config 790) and the output of pveversion -v. Is it maybe using passthrough?
Sure... and I think you found it!!! ;)

See below. Not only is there PCIe passthrough for the nVidia card (hostpci0: 0000:01:00,pcie=1),
but ALSO VM 791 (also disabled) is a near-duplicate with the same passthrough.

Here's a strange thought... at the moment I am traveling and can't test the following:

WHAT IF (this is just my imagination):
a) Proxmox causes this VM to go active during backup
b) The system is at the limit of PCIe bus devices
c) If the nVidia card is activated in a VM, then another PCIe device MUST be deactivated
d) And maybe that's NVMe #2

Could that be?

Here's...

qm config 790

agent: 1
audio0: device=ich9-intel-hda,driver=none
balloon: 8192...
bios: ovmf
boot: order=scsi0;ide2
cores: 8
cpu: host
cpuunits: 1000
description: ...
efidisk0: local:790/vm-790-disk-0.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K
hostpci0: 0000:01:00,pcie=1
ide0: nas-share:iso/virtio-win.iso,media=cdrom,size=612812K
ide2: nas-share:iso/Windows10.iso,media=cdrom,size=4779200K
machine: pc-q35-8.1
memory: 16384
meta: creation-qemu=8.1.5,ctime=1710629786
name: pvew10-nVidia-tmplt
net0: virtio=BC:24:11:91:12:F2,bridge=vmbr0
numa: 0
ostype: win10
scsi0: local:790/vm-790-disk-1.qcow2,cache=writethrough,discard=on,iothread=1,size=50G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=f570fc0a-53a2-4acb-89d4-494a71f90d6a
sockets: 1
vmgenid: 0875f0b0-d5b4-4cbb-be52-683cd6e66b2e
vmstatestorage: local

pveversion -v

proxmox-ve: 8.2.0 (running kernel: 6.8.4-2-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-11
proxmox-kernel-6.8: 6.8.4-3
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.5.13-3-pve-signed: 6.5.13-3
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
pve-kernel-5.15.143-1-pve: 5.15.143-1
pve-kernel-5.15.136-1-pve: 5.15.136-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.1
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.2-1
proxmox-backup-file-restore: 3.2.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.6
pve-container: 5.1.10
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.7
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2

And, here's proof (for me) that you gave me the hint I needed!

lspci

00:00.0 Host bridge: Intel Corporation Comet Lake-S 6c Host Bridge/DRAM Controller (rev 03)
00:02.0 VGA compatible controller: Intel Corporation Comet Lake-S GT2 [UHD Graphics P630] (rev 03)
00:04.0 Signal processing controller: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Thermal Subsystem (rev 03)
00:12.0 Signal processing controller: Intel Corporation Comet Lake PCH Thermal Controller
00:14.0 USB controller: Intel Corporation Comet Lake USB 3.1 xHCI Host Controller
00:14.2 RAM memory: Intel Corporation Comet Lake PCH Shared SRAM
00:16.0 Communication controller: Intel Corporation Comet Lake HECI Controller
00:17.0 SATA controller: Intel Corporation Comet Lake SATA AHCI Controller
00:1b.0 PCI bridge: Intel Corporation Comet Lake PCI Express Root Port #17 (rev f0)
00:1b.4 PCI bridge: Intel Corporation Comet Lake PCI Express Root Port #21 (rev f0)
00:1c.0 PCI bridge: Intel Corporation Comet Lake PCIe Port #6 (rev f0)
00:1c.6 PCI bridge: Intel Corporation Device 06be (rev f0)
00:1d.0 PCI bridge: Intel Corporation Comet Lake PCI Express Root Port #9 (rev f0)
00:1f.0 ISA bridge: Intel Corporation Device 0697
00:1f.3 Audio device: Intel Corporation Comet Lake PCH cAVS
00:1f.4 SMBus: Intel Corporation Comet Lake PCH SMBus Controller
00:1f.5 Serial bus controller: Intel Corporation Comet Lake PCH SPI Controller
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (11) I219-LM
01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
02:00.0 Non-Volatile memory controller: Micron/Crucial Technology Device 5415 (rev 01)
03:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
04:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
04:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
05:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

What is NOT in that list: my nVidia card.
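
For anyone chasing something similar, a hedged way to cross-check a VM's passthrough address against what is actually on the host bus right now (01:00.0 is the address from my host; substitute your own):

grep -H hostpci /etc/pve/qemu-server/*.conf         # every passthrough entry on this node
lspci -s 01:00.0                                    # what currently occupies that address
readlink /sys/bus/pci/devices/0000:01:00.0/driver   # which host driver owns it (e.g. nvme)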
 
Is it maybe using passthrough?

THAT leads to the root of the problem, combined with side effects unknown to me until now:

  1. Proxmox runs the scheduled VM backups at 00:00 every Saturday morning
  2. Those backups temporarily partly activate every VM, even ones that are off
  3. Apparently, that partial activation is sufficient to activate passed-through PCIe device(s)
  4. Some hosts may have a completely full PCIe bus; if an extra device is added, another device will disappear
On my particular host:
  1. My nVidia card was at 01:00. I configured passthrough for it and it was working, though usually not needed; it is referenced in a stopped VM.
  2. Adding the second NVMe device caused the nVidia card to disappear; the second NVMe now sits in the same slot/address as the now-absent nVidia card.
  3. Result: the weekly backup activates the "nVidia" passthrough and instead does something bad to the NVMe now at that address, causing it to go offline.
@fiona my only thought on this: could Proxmox notice one or more of the following?
  • An active block device (and member of a ZFS pool) is also configured as a PCIe passthrough device
  • A backup is about to semi-activate a VM containing a PCIe passthrough device
  • A backup is about to semi-activate a VM referencing a PCIe slot that's active in a ZFS pool
I don't know whether to call this a bug. It's certainly a somewhat subtle configuration error LOL.
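
In the meantime, here is a hedged sketch of a self-check an admin could run by hand. It assumes the local node's configs live under /etc/pve/qemu-server/ and only handles plain "hostpciN: <address>" entries (not mapped devices):

# For every passthrough address referenced by any VM config on this node,
# report which host driver (if any) currently owns that PCI address.
for conf in /etc/pve/qemu-server/*.conf; do
    grep -oP '^hostpci\d+:\s*\K[0-9a-fA-F:.]+' "$conf" | while read -r addr; do
        dev="0000:${addr#0000:}"                   # ensure a PCI domain prefix
        [[ "$dev" == *.* ]] || dev="${dev}.0"      # ensure a function suffix
        drvlink=$(readlink "/sys/bus/pci/devices/$dev/driver" 2>/dev/null)
        drv="${drvlink##*/}"
        echo "$(basename "$conf"): $addr -> driver: ${drv:-none/not-present}"
    done
done

A line such as "790.conf: 0000:01:00 -> driver: nvme" would have flagged my exact problem: a passthrough entry pointing at an address currently owned by the host's NVMe driver.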

THANK YOU!!!
 
Glad you were able to figure it out :) I would argue that we can't detect such things in general. While possible in principle, there will always be edge cases that are not covered, and there is a non-negligible maintenance burden for such code/checks to keep them working as intended without false positives/negatives.
 
@fiona I think the biggest caution for me is this:

Until now, I literally had NO idea, nor reason to believe, that a stopped VM would or could ever go active (even partially) "all by itself" in any scenario.

  • I always think of a VM as literally a virtual computer system
  • If off, it is OFF
  • Truly doesn't matter what is in the VM *.conf file -- since it's OFF
  • It is safe to reconfigure at any time -- since it's OFF
  • There can't be any side effects of a turned-off computer -- since it's OFF
    • Uses no power, no networking, no devices at all
    • Statically uses some storage space. That's about it.
Now I have learned: none of that is entirely true. Typically true yes... but overall, no.
  • Certain processes (e.g. those driven by pvescheduler; since I'm not an expert, I have no idea what else) need to warm up each VM, even if briefly, to do what they need to do.
Is there any big picture documentation about this? I have questions like...
  • Is this unique to Proxmox?
  • Is this "new" in some way?
  • Would anybody consider this a bug?
  • Are the things that can temporarily activate VMs documented and/or listed anywhere?
Seems to me it is important for a SysAdmin to know what could be impacted by their work, and when.

In this case, we have learned: a misconfigured PCIe passthrough in a stopped VM can harm active PCIe devices.

And we don't have unlimited time to fix it!
 
I agree it can be surprising, but it is required for the backup process of VMs, because those are done with the help of QEMU. And since we want to allow people to fully boot a VM while the backup is running, it is started with the full configuration in prelaunch mode. It is documented here: https://pve.proxmox.com/pve-docs/chapter-vzdump.html#_backup_modes
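
For the curious, a hedged way to observe this yourself during the backup window of a stopped VM (VMID 790 from this thread; exact status strings may vary by version):

qm status 790 --verbose            # the stopped VM briefly reports as running while its disks are read
ps aux | grep '[k]vm.*-id 790'     # the temporary QEMU process, launched with the VM's full configuration
# inside "qm monitor 790", the command "info status" should report a paused (prelaunch) run state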
 
I agree it can be surprising, but it is required for the backup process of VMs, because those are done with the help of QEMU. And since we want to allow people to fully boot a VM while the backup is running, it is started with the full configuration in prelaunch mode. It is documented here: https://pve.proxmox.com/pve-docs/chapter-vzdump.html#_backup_modes
Ummm... that page provides rather "subtle" documentation :) ... It's a sentence within an explanatory note?! Emphasis mine:

Proxmox VE live backup provides snapshot-like semantics on any storage type. It does not require that the underlying storage supports snapshots. Also please note that since the backups are done via a background QEMU process, a stopped VM will appear as running for a short amount of time while the VM disks are being read by QEMU. However the VM itself is not booted, only its disk(s) are read.

In further searching, I don't find "prelaunch" anywhere in the Wiki or the Administrator's Guide, and I don't find a clear explanation of this risk anywhere in the documentation.

Here are a few documentation suggestions to assist users:
  1. Add something like the following note to the above, as a second paragraph. I'll give it in suggested context...
Proxmox VE live backup provides snapshot-like semantics on any storage type. It does not require that the underlying storage supports snapshots. Also please note that since the backups are done via a background QEMU process, a stopped VM will appear as running for a short amount of time while the VM disks are being read by QEMU. However the VM itself is not booted, only its disk(s) are read.

NOTE: Consider excluding from backup any VM that is not fully configured. Backup activation of a stopped VM causes the VM's hardware to be activated briefly, even though the VM is not booted. Any configuration issue relating to PCIe or other host device passthrough can easily cause unintended host problems, such as PCIe slot/lane conflicts, etc. Backup exclusion is available in Datacenter->Backups->Edit->Selection Mode ("Exclude selected VMs")
  2. Add a note about this to the Administrator's Guide. I would suggest section 10.9 (PCIe Passthrough), at the top of the VM Configuration subhead. Perhaps something like:
NOTE: Until you have verified proper operation of PCIe Passthrough in a VM, it's best practice to exclude that VM from Proxmox VE live backup. Backup activation of a stopped VM causes VM hardware to be activated briefly, even though the VM is not booted. Any configuration issue relating to PCIe or other host device passthrough can easily cause unintended host problems, such as PCIe slot/lane conflicts, etc.
Backup exclusion is available in Datacenter->Backups->Edit->Selection Mode ("Exclude selected VMs")
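
For reference, a hedged sketch of how such an exclusion might end up looking in /etc/pve/jobs.cfg (790,791 are the VMIDs from my setup; verify the exact key names against the vzdump documentation for your PVE version):

vzdump: 3f9a084ec4febd6aa668de2f3f6d838fba2de390:1
schedule sat 00:00
all 1
exclude 790,791
compress zstd
enabled 1
mode snapshot
storage nas-share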
 
