VMs with hugepages: 1024 do not start anymore with PVE-kernel 6.5 (and root on ZFS)

Neobin

Edit:
The culprit is not PCIe passthrough, but the hugepages: 1024 setting; most likely in combination with root on ZFS, see:
https://forum.proxmox.com/threads/v...-anymore-on-pve-kernel-6-5.136741/post-606900
(Changed the thread title accordingly.)
/E

As requested by @dcsapak, here is a new thread as a follow-up to:
https://forum.proxmox.com/threads/o...le-on-test-no-subscription.135635/post-606716

Posting the info here again for completeness:
Both of my VMs with PCIe passthrough (which run perfectly fine on all the 6.2 kernels) cannot start anymore on the 6.5 PVE kernel. :(

One:
Bash:
Nov 20 06:08:29 pve2 pvedaemon[3220]: start VM 201: UPID:pve2:00000C94:000054EE:655AE9CD:qmstart:201:root@pam:
Nov 20 06:08:29 pve2 pvedaemon[1813]: <root@pam> starting task UPID:pve2:00000C94:000054EE:655AE9CD:qmstart:201:root@pam:
Nov 20 06:08:30 pve2 kernel: ------------[ cut here ]------------
Nov 20 06:08:30 pve2 kernel: kernel BUG at mm/migrate.c:654!
Nov 20 06:08:30 pve2 kernel: invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
Nov 20 06:08:30 pve2 kernel: CPU: 10 PID: 3220 Comm: task UPID:pve2: Tainted: P           O       6.5.11-3-pve #1
Nov 20 06:08:30 pve2 kernel: Hardware name: Supermicro Super Server/X12SDV-8C-SP6F, BIOS 1.3a 07/11/2023
Nov 20 06:08:30 pve2 kernel: RIP: 0010:migrate_folio_extra+0x87/0x90
Nov 20 06:08:30 pve2 kernel: Code: 31 ff 45 31 c0 c3 cc cc cc cc e8 54 e1 ff ff 44 89 e8 5b 41 5c 41 5d 41 5e 5d 31 d2 31 c9 31 f6 31 ff 45 31 c0 c3 cc cc cc cc <0f> 0b 0f 1f 80 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90
Nov 20 06:08:30 pve2 kernel: RSP: 0018:ff57cb7f06dab768 EFLAGS: 00010282
Nov 20 06:08:30 pve2 kernel: RAX: 0017ffffc4008067 RBX: ffc2db578b1af200 RCX: 0000000000000002
Nov 20 06:08:30 pve2 kernel: RDX: ffc2db578b1af200 RSI: ffc2db578d183740 RDI: ff16271d8b653498
Nov 20 06:08:30 pve2 kernel: RBP: ff57cb7f06dab790 R08: 0000000000000000 R09: 0000000000000000
Nov 20 06:08:30 pve2 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ff16271d8b653498
Nov 20 06:08:30 pve2 kernel: R13: 0000000000000002 R14: ffc2db578d183740 R15: ff57cb7f06dab95c
Nov 20 06:08:30 pve2 kernel: FS:  00007f4eed9e0b80(0000) GS:ff16275c00080000(0000) knlGS:0000000000000000
Nov 20 06:08:30 pve2 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 20 06:08:30 pve2 kernel: CR2: 00005645d3bb45f4 CR3: 0000000280fe2005 CR4: 0000000000771ee0
Nov 20 06:08:30 pve2 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Nov 20 06:08:30 pve2 kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Nov 20 06:08:30 pve2 kernel: PKRU: 55555554
Nov 20 06:08:30 pve2 kernel: Call Trace:
Nov 20 06:08:30 pve2 kernel:  <TASK>
Nov 20 06:08:30 pve2 kernel:  ? show_regs+0x6d/0x80
Nov 20 06:08:30 pve2 kernel:  ? die+0x37/0xa0
Nov 20 06:08:30 pve2 kernel:  ? do_trap+0xd4/0xf0
Nov 20 06:08:30 pve2 kernel:  ? do_error_trap+0x71/0xb0
Nov 20 06:08:30 pve2 kernel:  ? migrate_folio_extra+0x87/0x90
Nov 20 06:08:30 pve2 kernel:  ? exc_invalid_op+0x52/0x80
Nov 20 06:08:30 pve2 kernel:  ? migrate_folio_extra+0x87/0x90
Nov 20 06:08:30 pve2 kernel:  ? asm_exc_invalid_op+0x1b/0x20
Nov 20 06:08:30 pve2 kernel:  ? migrate_folio_extra+0x87/0x90
Nov 20 06:08:30 pve2 kernel:  ? move_to_new_folio+0x146/0x160
Nov 20 06:08:30 pve2 kernel:  migrate_pages_batch+0x856/0xbc0
Nov 20 06:08:30 pve2 kernel:  ? __pfx_remove_migration_pte+0x10/0x10
Nov 20 06:08:30 pve2 kernel:  ? __pfx_alloc_migration_target+0x10/0x10
Nov 20 06:08:30 pve2 kernel:  migrate_pages+0xbb6/0xd60
Nov 20 06:08:30 pve2 kernel:  ? __pfx_alloc_migration_target+0x10/0x10
Nov 20 06:08:30 pve2 kernel:  __alloc_contig_migrate_range+0xaf/0x1d0
Nov 20 06:08:30 pve2 kernel:  alloc_contig_range+0x153/0x280
Nov 20 06:08:30 pve2 kernel:  ? sysvec_apic_timer_interrupt+0xa6/0xd0
Nov 20 06:08:30 pve2 kernel:  alloc_contig_pages+0x204/0x260
Nov 20 06:08:30 pve2 kernel:  alloc_fresh_hugetlb_folio+0x70/0x1a0
Nov 20 06:08:30 pve2 kernel:  alloc_pool_huge_page+0x81/0x120
Nov 20 06:08:30 pve2 kernel:  __nr_hugepages_store_common+0x211/0x4d0
Nov 20 06:08:30 pve2 kernel:  nr_hugepages_store+0x92/0xa0
Nov 20 06:08:30 pve2 kernel:  kobj_attr_store+0xf/0x40
Nov 20 06:08:30 pve2 kernel:  sysfs_kf_write+0x3b/0x60
Nov 20 06:08:30 pve2 kernel:  kernfs_fop_write_iter+0x130/0x210
Nov 20 06:08:30 pve2 kernel:  vfs_write+0x251/0x440
Nov 20 06:08:30 pve2 kernel:  ksys_write+0x73/0x100
Nov 20 06:08:30 pve2 kernel:  __x64_sys_write+0x19/0x30
Nov 20 06:08:30 pve2 kernel:  do_syscall_64+0x58/0x90
Nov 20 06:08:30 pve2 kernel:  ? handle_mm_fault+0xad/0x360
Nov 20 06:08:30 pve2 kernel:  ? exit_to_user_mode_prepare+0x39/0x190
Nov 20 06:08:30 pve2 kernel:  ? irqentry_exit_to_user_mode+0x17/0x20
Nov 20 06:08:30 pve2 kernel:  ? irqentry_exit+0x43/0x50
Nov 20 06:08:30 pve2 kernel:  ? exc_page_fault+0x94/0x1b0
Nov 20 06:08:30 pve2 kernel:  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
Nov 20 06:08:30 pve2 kernel: RIP: 0033:0x7f4eedb16140
Nov 20 06:08:30 pve2 kernel: Code: 40 00 48 8b 15 c1 9c 0d 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 80 3d a1 24 0e 00 00 74 17 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 58 c3 0f 1f 80 00 00 00 00 48 83 ec 28 48 89
Nov 20 06:08:30 pve2 kernel: RSP: 002b:00007ffe48954218 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
Nov 20 06:08:30 pve2 kernel: RAX: ffffffffffffffda RBX: 00005645d26ae2a0 RCX: 00007f4eedb16140
Nov 20 06:08:30 pve2 kernel: RDX: 0000000000000003 RSI: 00005645da0fe890 RDI: 0000000000000011
Nov 20 06:08:30 pve2 kernel: RBP: 00005645da0fe890 R08: 0000000000000000 R09: 00007f4eedbf0d10
Nov 20 06:08:30 pve2 kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000003
Nov 20 06:08:30 pve2 kernel: R13: 00005645d26ae2a0 R14: 0000000000000011 R15: 00005645da0f98e0
Nov 20 06:08:30 pve2 kernel:  </TASK>
Nov 20 06:08:30 pve2 kernel: Modules linked in: ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter bpfilter sctp ip6_udp_tunnel udp_tunnel nf_tables sunrpc binfmt_misc bonding tls nfnetlink_log nfnetlink intel_rapl_msr intel_rapl_common intel_uncore_frequency intel_uncore_frequency_common i10nm_edac nfit x86_pkg_temp_thermal intel_powerclamp kvm_intel ipmi_ssif kvm nouveau snd_hda_intel snd_intel_dspcfg crct10dif_pclmul snd_intel_sdw_acpi polyval_clmulni snd_hda_codec polyval_generic irdma mxm_wmi ghash_clmulni_intel drm_ttm_helper aesni_intel snd_hda_core ttm snd_hwdep crypto_simd i40e drm_display_helper cryptd snd_pcm cmdlinepart rapl cec ib_uverbs snd_timer dax_hmem rc_core ast cxl_acpi spi_nor intel_cstate snd video intel_th_gth drm_shmem_helper mei_me isst_if_mmio isst_if_mbox_pci ib_core soundcore cxl_core wmi mtd pcspkr intel_th_pci drm_kms_helper isst_if_common mei intel_th acpi_ipmi ioatdma intel_vsec ipmi_si ipmi_devintf ipmi_msghandler acpi_pad acpi_power_meter joydev
Nov 20 06:08:30 pve2 kernel:  input_leds mac_hid vhost_net vhost vhost_iotlb tap coretemp drm efi_pstore dmi_sysfs ip_tables x_tables autofs4 zfs(PO) spl(O) btrfs blake2b_generic xor raid6_pq libcrc32c hid_generic usbmouse usbhid hid mpt3sas vfio_pci vfio_pci_core irqbypass vfio_iommu_type1 vfio ice iommufd xhci_pci sdhci_pci nvme raid_class xhci_pci_renesas crc32_pclmul scsi_transport_sas igb xhci_hcd cqhci spi_intel_pci gnss i2c_i801 nvme_core spi_intel sdhci i2c_smbus ahci i2c_algo_bit nvme_common i2c_ismt dca libahci pinctrl_cedarfork
Nov 20 06:08:30 pve2 kernel: ---[ end trace 0000000000000000 ]---
Nov 20 06:08:30 pve2 kernel: RIP: 0010:migrate_folio_extra+0x87/0x90
Nov 20 06:08:30 pve2 kernel: Code: 31 ff 45 31 c0 c3 cc cc cc cc e8 54 e1 ff ff 44 89 e8 5b 41 5c 41 5d 41 5e 5d 31 d2 31 c9 31 f6 31 ff 45 31 c0 c3 cc cc cc cc <0f> 0b 0f 1f 80 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90
Nov 20 06:08:30 pve2 kernel: RSP: 0018:ff57cb7f06dab768 EFLAGS: 00010282
Nov 20 06:08:30 pve2 kernel: RAX: 0017ffffc4008067 RBX: ffc2db578b1af200 RCX: 0000000000000002
Nov 20 06:08:30 pve2 kernel: RDX: ffc2db578b1af200 RSI: ffc2db578d183740 RDI: ff16271d8b653498
Nov 20 06:08:30 pve2 kernel: RBP: ff57cb7f06dab790 R08: 0000000000000000 R09: 0000000000000000
Nov 20 06:08:30 pve2 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ff16271d8b653498
Nov 20 06:08:30 pve2 kernel: R13: 0000000000000002 R14: ffc2db578d183740 R15: ff57cb7f06dab95c
Nov 20 06:08:30 pve2 kernel: FS:  00007f4eed9e0b80(0000) GS:ff16275c00080000(0000) knlGS:0000000000000000
Nov 20 06:08:30 pve2 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 20 06:08:30 pve2 kernel: CR2: 00005645d3bb45f4 CR3: 0000000280fe2005 CR4: 0000000000771ee0
Nov 20 06:08:30 pve2 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Nov 20 06:08:30 pve2 kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Nov 20 06:08:30 pve2 kernel: PKRU: 55555554
Nov 20 06:08:30 pve2 pvedaemon[1813]: <root@pam> end task UPID:pve2:00000C94:000054EE:655AE9CD:qmstart:201:root@pam: unable to read tail (got 0 bytes)
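Side note on the trace: the qmstart task crashes while growing the 1 GiB hugepage pool through sysfs (sysfs_kf_write -> nr_hugepages_store -> alloc_contig_pages, per the call trace above). A minimal sketch that exercises the same kernel path by hand, assuming 1 GiB hugepages and an illustrative page count:
Bash:
# Grow the 1 GiB hugepage pool at runtime; this walks the same
# alloc_contig_pages -> migrate_pages path seen in the trace above.
# The page count (4) is illustrative only.
echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages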
Bash:
agent: 0
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 8
cpu: host
efidisk0: local-zfs:vm-201-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:16:00,pcie=1
hugepages: 1024
ide2: none,media=cdrom
machine: q35
memory: 131072
meta: creation-qemu=7.0.0,ctime=1665967836
name: TrueNAS
net0: virtio=[...],bridge=vmbr0
numa: 1
ostype: other
scsi0: local-zfs:vm-201-disk-1,discard=on,iothread=1,size=16G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=[...]
sockets: 1
startup: order=2,up=120
vmgenid: [...]
Bash:
16:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI Fusion-MPT 12GSAS/PCIe Secure SAS38xx [1000:00e6]
        Subsystem: Broadcom / LSI 9500-16i Tri-Mode HBA [1000:4050]
        Kernel driver in use: vfio-pci
        Kernel modules: mpt3sas
Bash:
softdep mpt3sas pre: vfio-pci
softdep nouveau pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
options vfio-pci ids=1000:00e6,10de:1fb0,10de:10fa
Bash:
proxmox-ve: 8.0.2 (running kernel: 6.5.11-3-pve)
pve-manager: 8.0.9 (running version: 8.0.9/fd1a0ae1b385cdcd)
proxmox-kernel-helper: 8.0.5
proxmox-kernel-6.5: 6.5.11-3
proxmox-kernel-6.5.11-3-pve: 6.5.11-3
proxmox-kernel-6.2.16-19-pve: 6.2.16-19
proxmox-kernel-6.2: 6.2.16-19
ceph-fuse: 18.2.0-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx6
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.6
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.10
libpve-guest-common-perl: 5.0.5
libpve-http-server-perl: 5.0.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.4
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.4-1
proxmox-backup-file-restore: 3.0.4-1
proxmox-kernel-helper: 8.0.5
proxmox-mail-forward: 0.2.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.1.1
pve-cluster: 8.0.5
pve-container: 5.0.5
pve-docs: 8.0.5
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.0.7
pve-qemu-kvm: 8.1.2-2
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.8
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.0-pve3
 
Two:
Bash:
Nov 20 05:49:31 pve2 pve-guests[1806]: start VM 202: UPID:pve2:0000070E:00000816:655AE55B:qmstart:202:root@pam:
Nov 20 05:49:31 pve2 pve-guests[1805]: <root@pam> starting task UPID:pve2:0000070E:00000816:655AE55B:qmstart:202:root@pam:
Nov 20 05:49:31 pve2 kernel: kvm[1807]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Nov 20 05:49:32 pve2 kernel: ------------[ cut here ]------------
Nov 20 05:49:32 pve2 kernel: kernel BUG at mm/migrate.c:654!
Nov 20 05:49:32 pve2 kernel: invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
Nov 20 05:49:32 pve2 kernel: CPU: 4 PID: 1806 Comm: task UPID:pve2: Tainted: P           O       6.5.11-3-pve #1
Nov 20 05:49:32 pve2 kernel: Hardware name: Supermicro Super Server/X12SDV-8C-SP6F, BIOS 1.3a 07/11/2023
Nov 20 05:49:32 pve2 kernel: RIP: 0010:migrate_folio_extra+0x87/0x90
Nov 20 05:49:32 pve2 kernel: Code: 31 ff 45 31 c0 c3 cc cc cc cc e8 54 e1 ff ff 44 89 e8 5b 41 5c 41 5d 41 5e 5d 31 d2 31 c9 31 f6 31 ff 45 31 c0 c3 cc cc cc cc <0f> 0b 0f 1f 80 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90
Nov 20 05:49:32 pve2 kernel: RSP: 0018:ff4a1a51a070f7d8 EFLAGS: 00010282
Nov 20 05:49:32 pve2 kernel: RAX: 0017ffffc0008025 RBX: ff9a04388b015e40 RCX: 0000000000000002
Nov 20 05:49:32 pve2 kernel: RDX: ff9a04388b015e40 RSI: ff9a04388a69f680 RDI: ff2f9f333b25e5c0
Nov 20 05:49:32 pve2 kernel: RBP: ff4a1a51a070f800 R08: 0000000000000000 R09: 0000000000000000
Nov 20 05:49:32 pve2 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ff2f9f333b25e5c0
Nov 20 05:49:32 pve2 kernel: R13: 0000000000000002 R14: ff9a04388a69f680 R15: ff4a1a51a070f9cc
Nov 20 05:49:32 pve2 kernel: FS:  00007fe5db4eeb80(0000) GS:ff2f9f717ff00000(0000) knlGS:0000000000000000
Nov 20 05:49:32 pve2 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 20 05:49:32 pve2 kernel: CR2: 000055c6329cf300 CR3: 000000028abce005 CR4: 0000000000771ee0
Nov 20 05:49:32 pve2 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Nov 20 05:49:32 pve2 kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Nov 20 05:49:32 pve2 kernel: PKRU: 55555554
Nov 20 05:49:32 pve2 kernel: Call Trace:
Nov 20 05:49:32 pve2 kernel:  <TASK>
Nov 20 05:49:32 pve2 kernel:  ? show_regs+0x6d/0x80
Nov 20 05:49:32 pve2 kernel:  ? die+0x37/0xa0
Nov 20 05:49:32 pve2 kernel:  ? do_trap+0xd4/0xf0
Nov 20 05:49:32 pve2 kernel:  ? do_error_trap+0x71/0xb0
Nov 20 05:49:32 pve2 kernel:  ? migrate_folio_extra+0x87/0x90
Nov 20 05:49:32 pve2 kernel:  ? exc_invalid_op+0x52/0x80
Nov 20 05:49:32 pve2 kernel:  ? migrate_folio_extra+0x87/0x90
Nov 20 05:49:32 pve2 kernel:  ? asm_exc_invalid_op+0x1b/0x20
Nov 20 05:49:32 pve2 kernel:  ? migrate_folio_extra+0x87/0x90
Nov 20 05:49:32 pve2 kernel:  ? move_to_new_folio+0x146/0x160
Nov 20 05:49:32 pve2 kernel:  migrate_pages_batch+0x856/0xbc0
Nov 20 05:49:32 pve2 kernel:  ? __pfx_remove_migration_pte+0x10/0x10
Nov 20 05:49:32 pve2 kernel:  ? __pfx_alloc_migration_target+0x10/0x10
Nov 20 05:49:32 pve2 kernel:  migrate_pages+0xbb6/0xd60
Nov 20 05:49:32 pve2 kernel:  ? __pfx_alloc_migration_target+0x10/0x10
Nov 20 05:49:32 pve2 kernel:  __alloc_contig_migrate_range+0xaf/0x1d0
Nov 20 05:49:32 pve2 kernel:  alloc_contig_range+0x153/0x280
Nov 20 05:49:32 pve2 kernel:  ? sysvec_apic_timer_interrupt+0xa6/0xd0
Nov 20 05:49:32 pve2 kernel:  alloc_contig_pages+0x204/0x260
Nov 20 05:49:32 pve2 kernel:  alloc_fresh_hugetlb_folio+0x70/0x1a0
Nov 20 05:49:32 pve2 kernel:  alloc_pool_huge_page+0x81/0x120
Nov 20 05:49:32 pve2 kernel:  __nr_hugepages_store_common+0x211/0x4d0
Nov 20 05:49:32 pve2 kernel:  nr_hugepages_store+0x92/0xa0
Nov 20 05:49:32 pve2 kernel:  kobj_attr_store+0xf/0x40
Nov 20 05:49:32 pve2 kernel:  sysfs_kf_write+0x3b/0x60
Nov 20 05:49:32 pve2 kernel:  kernfs_fop_write_iter+0x130/0x210
Nov 20 05:49:32 pve2 kernel:  vfs_write+0x251/0x440
Nov 20 05:49:32 pve2 kernel:  ksys_write+0x73/0x100
Nov 20 05:49:32 pve2 kernel:  __x64_sys_write+0x19/0x30
Nov 20 05:49:32 pve2 kernel:  do_syscall_64+0x58/0x90
Nov 20 05:49:32 pve2 kernel:  ? irqentry_exit+0x43/0x50
Nov 20 05:49:32 pve2 kernel:  ? exc_page_fault+0x94/0x1b0
Nov 20 05:49:32 pve2 kernel:  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
Nov 20 05:49:32 pve2 kernel: RIP: 0033:0x7fe5db624140
Nov 20 05:49:32 pve2 kernel: Code: 40 00 48 8b 15 c1 9c 0d 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 80 3d a1 24 0e 00 00 74 17 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 58 c3 0f 1f 80 00 00 00 00 48 83 ec 28 48 89
Nov 20 05:49:32 pve2 kernel: RSP: 002b:00007fffc59ed018 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
Nov 20 05:49:32 pve2 kernel: RAX: ffffffffffffffda RBX: 000055c62da552a0 RCX: 00007fe5db624140
Nov 20 05:49:32 pve2 kernel: RDX: 0000000000000001 RSI: 000055c634d59bf0 RDI: 0000000000000010
Nov 20 05:49:32 pve2 kernel: RBP: 000055c634d59bf0 R08: 0000000000000000 R09: 000000000000010f
Nov 20 05:49:32 pve2 kernel: R10: 03a94646a4066c37 R11: 0000000000000202 R12: 0000000000000001
Nov 20 05:49:32 pve2 kernel: R13: 000055c62da552a0 R14: 0000000000000010 R15: 000055c634d52c20
Nov 20 05:49:32 pve2 kernel:  </TASK>
Nov 20 05:49:32 pve2 kernel: Modules linked in: ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter bpfilter sctp ip6_udp_tunnel udp_tunnel nf_tables sunrpc binfmt_misc bonding tls nfnetlink_log nfnetlink intel_rapl_msr intel_rapl_common intel_uncore_frequency intel_uncore_frequency_common i10nm_edac nfit x86_pkg_temp_thermal intel_powerclamp kvm_intel ipmi_ssif kvm nouveau snd_hda_intel snd_intel_dspcfg snd_intel_sdw_acpi irdma crct10dif_pclmul polyval_clmulni snd_hda_codec mxm_wmi drm_ttm_helper polyval_generic ttm ghash_clmulni_intel aesni_intel i40e snd_hda_core crypto_simd drm_display_helper cryptd snd_hwdep snd_pcm cec ib_uverbs cmdlinepart dax_hmem rc_core ast rapl snd_timer cxl_acpi video drm_shmem_helper intel_th_gth spi_nor intel_cstate snd cxl_core isst_if_mmio isst_if_mbox_pci ib_core pcspkr drm_kms_helper wmi mei_me soundcore mtd acpi_ipmi intel_th_pci isst_if_common mei intel_th ipmi_si intel_vsec ipmi_devintf ipmi_msghandler acpi_pad joydev input_leds ioatdma
Nov 20 05:49:32 pve2 kernel:  acpi_power_meter mac_hid vhost_net vhost vhost_iotlb tap coretemp drm efi_pstore dmi_sysfs ip_tables x_tables autofs4 zfs(PO) spl(O) btrfs blake2b_generic xor raid6_pq libcrc32c hid_generic usbmouse usbhid hid mpt3sas vfio_pci vfio_pci_core irqbypass vfio_iommu_type1 vfio xhci_pci sdhci_pci iommufd xhci_pci_renesas nvme ice crc32_pclmul igb raid_class cqhci i2c_i801 xhci_hcd nvme_core gnss scsi_transport_sas spi_intel_pci sdhci i2c_smbus ahci i2c_algo_bit spi_intel nvme_common i2c_ismt dca libahci pinctrl_cedarfork
Nov 20 05:49:32 pve2 kernel: ---[ end trace 0000000000000000 ]---
Nov 20 05:49:32 pve2 kernel: RIP: 0010:migrate_folio_extra+0x87/0x90
Nov 20 05:49:32 pve2 kernel: Code: 31 ff 45 31 c0 c3 cc cc cc cc e8 54 e1 ff ff 44 89 e8 5b 41 5c 41 5d 41 5e 5d 31 d2 31 c9 31 f6 31 ff 45 31 c0 c3 cc cc cc cc <0f> 0b 0f 1f 80 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90
Nov 20 05:49:32 pve2 kernel: RSP: 0018:ff4a1a51a070f7d8 EFLAGS: 00010282
Nov 20 05:49:32 pve2 kernel: RAX: 0017ffffc0008025 RBX: ff9a04388b015e40 RCX: 0000000000000002
Nov 20 05:49:32 pve2 kernel: RDX: ff9a04388b015e40 RSI: ff9a04388a69f680 RDI: ff2f9f333b25e5c0
Nov 20 05:49:32 pve2 kernel: RBP: ff4a1a51a070f800 R08: 0000000000000000 R09: 0000000000000000
Nov 20 05:49:32 pve2 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ff2f9f333b25e5c0
Nov 20 05:49:32 pve2 kernel: R13: 0000000000000002 R14: ff9a04388a69f680 R15: ff4a1a51a070f9cc
Nov 20 05:49:32 pve2 kernel: FS:  00007fe5db4eeb80(0000) GS:ff2f9f717ff00000(0000) knlGS:0000000000000000
Nov 20 05:49:32 pve2 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 20 05:49:32 pve2 kernel: CR2: 000055c6329cf300 CR3: 000000028abce005 CR4: 0000000000771ee0
Nov 20 05:49:32 pve2 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Nov 20 05:49:32 pve2 kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Nov 20 05:49:32 pve2 kernel: PKRU: 55555554
Nov 20 05:49:32 pve2 pvesh[1804]: Starting VM 202 failed: unable to read tail (got 0 bytes)
Bash:
agent: 1
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 8
cpu: host
efidisk0: local-zfs:vm-202-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:15:00,pcie=1
hugepages: 1024
ide2: none,media=cdrom
machine: q35
memory: 8192
meta: creation-qemu=8.0.2,ctime=1689743256
name: Jellyfin
net0: virtio=[...],bridge=vmbr0
numa: 1
ostype: l26
scsi0: local-zfs:vm-202-disk-1,discard=on,iothread=1,size=64G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=[...]
sockets: 1
vmgenid: [...]
Bash:
15:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU117GLM [Quadro T1000 Mobile] [10de:1fb0] (rev a1)
        Subsystem: NVIDIA Corporation TU117GLM [Quadro T1000 Mobile] [10de:12db]
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau
15:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10fa] (rev a1)
        Subsystem: NVIDIA Corporation Device [10de:12db]
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel
 
Just to reference it here too (originally in the 6.5 thread):


Both of my VMs with PCIe passthrough (which run perfectly fine on all the 6.2 kernels) cannot start anymore on the 6.5 kernel. :(
Can you maybe open a new thread? I'll see if I can reproduce it here.

EDIT:

Generally, passthrough works here (tested on a consumer AMD board and an older Intel server mainboard).

On a hunch: the error message looks more memory-related than passthrough-related. Could you maybe try without hugepages?
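For testing, dropping the setting from the VM config could look something like this (just a sketch; VMID 201 taken from this thread, adjust to your setup):
Bash:
qm set 201 --delete hugepages   # remove the setting from the VM config
qm start 201                    # try to start the VM without hugepages
qm set 201 --hugepages 1024     # restore the setting afterwards, if desired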
 
I have no real solution, just a bit of experience gathered over the years: I stopped trying to do PCIe passthrough for good. First, it was hard to get a combination of mainboard and cards that worked reliably; then it was just too frustrating fiddling around after each firmware or PVE kernel update. Sometimes I needed restarts after each passthrough, then it just failed constantly. The final nail in the coffin was a new kernel that dropped support for my older hardware and killed PCIe passthrough altogether; I couldn't bear it anymore.

For the things I needed passthrough for (USB and GPU), I just use LX(C) containers, bind-mount everything there, and am happy that it still works after an update.
 
I am running ZFS on root as well, with one VM on ZFS and the other VM on ext4... no problems here, just to share...

But with hugepages enabled? And FWICT, it might also need encryption on the root dataset – but just guesstimating here.
 
Are you running ZFS on root?

There seem to be some issues with that and huge pages being flushed to disk on more recent (6.3+) kernels; e.g., see:
https://bugzilla.kernel.org/show_bug.cgi?id=217747
https://github.com/openzfs/zfs/issues/15140

Yes. Thanks for the references.

And FWICT, it might also need encryption on the root dataset – but just guesstimating here.

No encryption at all here on my side.

On a hunch: the error message looks more memory-related than passthrough-related. Could you maybe try without hugepages?

You are both totally right.
It is not the PCIe passthrough at all, but the hugepages: 1024 setting.

Both VMs with PCIe passthrough, but without hugepages: 1024, work fine.
A test VM without PCIe passthrough, but with hugepages: 1024, does not start:
Bash:
Nov 21 05:18:51 pve2 pvedaemon[21410]: start VM 500: UPID:pve2:000053A2:0004FBAD:655C2FAB:qmstart:500:root@pam:
Nov 21 05:18:51 pve2 pvedaemon[1780]: <root@pam> starting task UPID:pve2:000053A2:0004FBAD:655C2FAB:qmstart:500:root@pam:
Nov 21 05:18:52 pve2 kernel: ------------[ cut here ]------------
Nov 21 05:18:52 pve2 kernel: kernel BUG at mm/migrate.c:654!
Nov 21 05:18:52 pve2 kernel: invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
Nov 21 05:18:52 pve2 kernel: CPU: 2 PID: 21410 Comm: task UPID:pve2: Tainted: P           O       6.5.11-3-pve #1
Nov 21 05:18:52 pve2 kernel: Hardware name: Supermicro Super Server/X12SDV-8C-SP6F, BIOS 1.3a 07/11/2023
Nov 21 05:18:52 pve2 kernel: RIP: 0010:migrate_folio_extra+0x87/0x90
Nov 21 05:18:52 pve2 kernel: Code: 31 ff 45 31 c0 c3 cc cc cc cc e8 54 e1 ff ff 44 89 e8 5b 41 5c 41 5d 41 5e 5d 31 d2 31 c9 31 f6 31 ff 45 31 c0 c3 cc cc cc cc <0f> 0b 0f 1f 80 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90
Nov 21 05:18:52 pve2 kernel: RSP: 0018:ff4aa6f7065e3738 EFLAGS: 00010282
Nov 21 05:18:52 pve2 kernel: RAX: 0017ffffc0008025 RBX: ffdbba734b2b4580 RCX: 0000000000000002
Nov 21 05:18:52 pve2 kernel: RDX: ffdbba734b2b4580 RSI: ffdbba73d10c9000 RDI: ff37d3233207eeb0
Nov 21 05:18:52 pve2 kernel: RBP: ff4aa6f7065e3760 R08: 0000000000000000 R09: 0000000000000000
Nov 21 05:18:52 pve2 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ff37d3233207eeb0
Nov 21 05:18:52 pve2 kernel: R13: 0000000000000002 R14: ffdbba73d10c9000 R15: ff4aa6f7065e392c
Nov 21 05:18:52 pve2 kernel: FS:  00007f77a8d1ab80(0000) GS:ff37d3617fe80000(0000) knlGS:0000000000000000
Nov 21 05:18:52 pve2 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 21 05:18:52 pve2 kernel: CR2: 00007f251c189420 CR3: 000000010add4001 CR4: 0000000000771ee0
Nov 21 05:18:52 pve2 kernel: PKRU: 55555554
Nov 21 05:18:52 pve2 kernel: Call Trace:
Nov 21 05:18:52 pve2 kernel:  <TASK>
Nov 21 05:18:52 pve2 kernel:  ? show_regs+0x6d/0x80
Nov 21 05:18:52 pve2 kernel:  ? die+0x37/0xa0
Nov 21 05:18:52 pve2 kernel:  ? do_trap+0xd4/0xf0
Nov 21 05:18:52 pve2 kernel:  ? do_error_trap+0x71/0xb0
Nov 21 05:18:52 pve2 kernel:  ? migrate_folio_extra+0x87/0x90
Nov 21 05:18:52 pve2 kernel:  ? exc_invalid_op+0x52/0x80
Nov 21 05:18:52 pve2 kernel:  ? migrate_folio_extra+0x87/0x90
Nov 21 05:18:52 pve2 kernel:  ? asm_exc_invalid_op+0x1b/0x20
Nov 21 05:18:52 pve2 kernel:  ? migrate_folio_extra+0x87/0x90
Nov 21 05:18:52 pve2 kernel:  ? move_to_new_folio+0x146/0x160
Nov 21 05:18:52 pve2 kernel:  migrate_pages_batch+0x856/0xbc0
Nov 21 05:18:52 pve2 kernel:  ? __pfx_remove_migration_pte+0x10/0x10
Nov 21 05:18:52 pve2 kernel:  ? __pfx_alloc_migration_target+0x10/0x10
Nov 21 05:18:52 pve2 kernel:  migrate_pages+0xbb6/0xd60
Nov 21 05:18:52 pve2 kernel:  ? __pfx_alloc_migration_target+0x10/0x10
Nov 21 05:18:52 pve2 kernel:  __alloc_contig_migrate_range+0xaf/0x1d0
Nov 21 05:18:52 pve2 kernel:  alloc_contig_range+0x153/0x280
Nov 21 05:18:52 pve2 kernel:  ? sysvec_apic_timer_interrupt+0xa6/0xd0
Nov 21 05:18:52 pve2 kernel:  alloc_contig_pages+0x204/0x260
Nov 21 05:18:52 pve2 kernel:  alloc_fresh_hugetlb_folio+0x70/0x1a0
Nov 21 05:18:52 pve2 kernel:  alloc_pool_huge_page+0x81/0x120
Nov 21 05:18:52 pve2 kernel:  __nr_hugepages_store_common+0x211/0x4d0
Nov 21 05:18:52 pve2 kernel:  nr_hugepages_store+0x92/0xa0
Nov 21 05:18:52 pve2 kernel:  kobj_attr_store+0xf/0x40
Nov 21 05:18:52 pve2 kernel:  sysfs_kf_write+0x3b/0x60
Nov 21 05:18:52 pve2 kernel:  kernfs_fop_write_iter+0x130/0x210
Nov 21 05:18:52 pve2 kernel:  vfs_write+0x251/0x440
Nov 21 05:18:52 pve2 kernel:  ksys_write+0x73/0x100
Nov 21 05:18:52 pve2 kernel:  __x64_sys_write+0x19/0x30
Nov 21 05:18:52 pve2 kernel:  do_syscall_64+0x58/0x90
Nov 21 05:18:52 pve2 kernel:  ? exit_to_user_mode_prepare+0x39/0x190
Nov 21 05:18:52 pve2 kernel:  ? syscall_exit_to_user_mode+0x37/0x60
Nov 21 05:18:52 pve2 kernel:  ? do_syscall_64+0x67/0x90
Nov 21 05:18:52 pve2 kernel:  ? irqentry_exit_to_user_mode+0x17/0x20
Nov 21 05:18:52 pve2 kernel:  ? irqentry_exit+0x43/0x50
Nov 21 05:18:52 pve2 kernel:  ? exc_page_fault+0x94/0x1b0
Nov 21 05:18:52 pve2 kernel:  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
Nov 21 05:18:52 pve2 kernel: RIP: 0033:0x7f77a8e50140
Nov 21 05:18:52 pve2 kernel: Code: 40 00 48 8b 15 c1 9c 0d 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 80 3d a1 24 0e 00 00 74 17 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 58 c3 0f 1f 80 00 00 00 00 48 83 ec 28 48 89
Nov 21 05:18:52 pve2 kernel: RSP: 002b:00007ffd5096d6a8 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
Nov 21 05:18:52 pve2 kernel: RAX: ffffffffffffffda RBX: 000055c30e2292a0 RCX: 00007f77a8e50140
Nov 21 05:18:52 pve2 kernel: RDX: 0000000000000001 RSI: 000055c315cd8540 RDI: 0000000000000012
Nov 21 05:18:52 pve2 kernel: RBP: 000055c315cd8540 R08: 0000000000000000 R09: 00007f77a8f2ad10
Nov 21 05:18:52 pve2 kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000001
Nov 21 05:18:52 pve2 kernel: R13: 000055c30e2292a0 R14: 0000000000000012 R15: 000055c315cd03c0
Nov 21 05:18:52 pve2 kernel:  </TASK>
Nov 21 05:18:52 pve2 kernel: Modules linked in: tcp_diag inet_diag ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter bpfilter sctp ip6_udp_tunnel udp_tunnel nf_tables sunrpc binfmt_misc bonding tls nfnetlink_log nfnetlink intel_rapl_msr intel_rapl_common intel_uncore_frequency intel_uncore_frequency_common i10nm_edac nfit x86_pkg_temp_thermal intel_powerclamp kvm_intel ipmi_ssif nouveau kvm snd_hda_intel crct10dif_pclmul snd_intel_dspcfg polyval_clmulni snd_intel_sdw_acpi polyval_generic mxm_wmi snd_hda_codec ghash_clmulni_intel drm_ttm_helper irdma aesni_intel ttm snd_hda_core crypto_simd snd_hwdep drm_display_helper cryptd i40e snd_pcm rapl snd_timer cec ib_uverbs dax_hmem snd rc_core ast cxl_acpi cmdlinepart drm_shmem_helper video intel_th_gth intel_cstate ib_core cxl_core spi_nor soundcore pcspkr wmi mei_me drm_kms_helper isst_if_mbox_pci isst_if_mmio intel_th_pci isst_if_common mtd mei intel_th acpi_ipmi ioatdma intel_vsec ipmi_si ipmi_devintf ipmi_msghandler acpi_pad
Nov 21 05:18:52 pve2 kernel:  acpi_power_meter joydev input_leds mac_hid vhost_net vhost vhost_iotlb tap drm coretemp efi_pstore dmi_sysfs ip_tables x_tables autofs4 zfs(PO) spl(O) btrfs blake2b_generic xor raid6_pq libcrc32c hid_generic usbmouse usbhid hid mpt3sas vfio_pci vfio_pci_core irqbypass vfio_iommu_type1 vfio xhci_pci sdhci_pci iommufd nvme xhci_pci_renesas ice crc32_pclmul igb raid_class xhci_hcd cqhci nvme_core i2c_i801 spi_intel_pci gnss scsi_transport_sas sdhci spi_intel i2c_smbus i2c_algo_bit ahci nvme_common i2c_ismt libahci dca pinctrl_cedarfork
Nov 21 05:18:52 pve2 kernel: ---[ end trace 0000000000000000 ]---
Nov 21 05:18:52 pve2 kernel: RIP: 0010:migrate_folio_extra+0x87/0x90
Nov 21 05:18:52 pve2 kernel: Code: 31 ff 45 31 c0 c3 cc cc cc cc e8 54 e1 ff ff 44 89 e8 5b 41 5c 41 5d 41 5e 5d 31 d2 31 c9 31 f6 31 ff 45 31 c0 c3 cc cc cc cc <0f> 0b 0f 1f 80 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90
Nov 21 05:18:52 pve2 kernel: RSP: 0018:ff4aa6f7065e3738 EFLAGS: 00010282
Nov 21 05:18:52 pve2 kernel: RAX: 0017ffffc0008025 RBX: ffdbba734b2b4580 RCX: 0000000000000002
Nov 21 05:18:52 pve2 kernel: RDX: ffdbba734b2b4580 RSI: ffdbba73d10c9000 RDI: ff37d3233207eeb0
Nov 21 05:18:52 pve2 kernel: RBP: ff4aa6f7065e3760 R08: 0000000000000000 R09: 0000000000000000
Nov 21 05:18:52 pve2 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ff37d3233207eeb0
Nov 21 05:18:52 pve2 kernel: R13: 0000000000000002 R14: ffdbba73d10c9000 R15: ff4aa6f7065e392c
Nov 21 05:18:52 pve2 kernel: FS:  00007f77a8d1ab80(0000) GS:ff37d3617fe80000(0000) knlGS:0000000000000000
Nov 21 05:18:52 pve2 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 21 05:18:52 pve2 kernel: CR2: 00007f251c189420 CR3: 000000010add4001 CR4: 0000000000771ee0
Nov 21 05:18:52 pve2 kernel: PKRU: 55555554
Nov 21 05:18:52 pve2 pvedaemon[1780]: <root@pam> end task UPID:pve2:000053A2:0004FBAD:655C2FAB:qmstart:500:root@pam: unable to read tail (got 0 bytes)

Thank you so much for the lightning fast support. :) Love you. :cool:
 
Thanks for the feedback!

Interestingly, a colleague could not reproduce this yet with ZFS and hugepages enabled, but FWICT the issue might be a bit racy and possibly depend on the amount of memory available; e.g., something actually needs to trigger the writeback with a huge page involved.
The question here is whether it's really a bug in how ZFS interacts with the kernel (i.e., the new "folio" rework for memory management), or whether the kernel itself does something odd that is only (or more likely to be) exposed when using ZFS.

But in any case, there sadly doesn't seem to be a workaround available, so I think we have to add this as a known issue for now, until we, or somebody else, get around to looking more closely into it.
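In the meantime, the mitigations that follow from this thread would be dropping the hugepages setting (see above), or booting the known-good 6.2 kernel. A sketch of pinning it, with the version taken from the pveversion output above:
Bash:
# Boot the known-good 6.2 kernel until the issue is resolved.
proxmox-boot-tool kernel pin 6.2.16-19-pve
reboot
# To undo the pin later:
proxmox-boot-tool kernel unpin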
 
I personally am fine with not using hugepages; not a huge ( :cool: ) deal, I guess.

To give some info on my test with the above-mentioned test VM 500:
  • The VM config was almost the same as the config of VM 202 above, of course without the PCIe passthrough. It also had 8 GB of memory assigned; vDisk and EFI disk also on: local-zfs.
  • The host has 256 GB RAM.
  • ZFS ARC is limited to 32 GB.
  • I have not reserved or configured hugepages in any way through the kernel cmdline (but see the sketch after this list).
  • Both of them (host RAM and ZFS ARC) were almost completely free at the start of test VM 500, since I need(ed) to reboot the host every time after a failed VM start: the host becomes unstable/unresponsive to the point where, e.g., a systemctl status XYZ hangs, and on reboot/shutdown systemd tasks run into timeouts. Sometimes I even needed to hard-reset the host, because pve-guests.service has no timeout and runs indefinitely. In short: the host was freshly booted and no other guest was running.
  • Test VM 500 was started manually through the web UI some time after the host booted up; so it was not set to autostart, of course.
  • The start error on the freshly created test VM 500 (with a quick install of Debian inside) came up immediately on the first try; with hugepages: 1024, of course.
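As mentioned in the list, no hugepages were reserved via the kernel cmdline. An untested idea (just a sketch, not a confirmed workaround): reserving the 1 GiB pages at boot should avoid the runtime alloc_contig_pages/migrate_pages path that crashes in the traces above:
Bash:
# Untested sketch, not a confirmed workaround: reserve the hugepage pool at
# boot so VM start no longer has to grow it at runtime.
# Kernel cmdline parameters (136 x 1 GiB would cover the 128 GiB + 8 GiB
# VMs above; adjust to your setup):
#   default_hugepagesz=1G hugepagesz=1G hugepages=136
# With root on ZFS (systemd-boot), append them to /etc/kernel/cmdline, then:
proxmox-boot-tool refresh   # and reboot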
 
Thanks to your description, I could reproduce this bug. At first I could not reproduce it, since my zfs_arc_max was set to 16 GB.
Could you try decreasing your zfs_arc_max and see if the problem still persists?
 
Unfortunately, changing zfs_arc_max does not help here for me. The VM start errors out every time with the same known error.

zfs_arc_max values I tested and verified each time with arc_summary:
  • 32 GiB (34359738368)
  • 16 GiB (17179869184)
  • 8 GiB (8589934592)
  • 4 GiB (4294967296) (with: zfs_arc_min=2147483648)
  • 125.6 GiB (no manual limit set at all / the 50% default)

Each value was set permanently through /etc/modprobe.d/zfs.conf (followed by an update-initramfs -u -k all, of course), since I needed to reboot after every failed VM start attempt anyway. (For the "no manual limit / 50% default" test, /etc/modprobe.d/zfs.conf was removed completely.)
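For reference, the permanent variant described above would look like this, using the 16 GiB value from the list:
Bash:
# /etc/modprobe.d/zfs.conf -- the 16 GiB value from the list above
options zfs zfs_arc_max=17179869184
# apply to the initramfs, then reboot:
update-initramfs -u -k all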
 
FWIW, I tried to trigger the issue on an older but up-to-date system with a fresh VM configured similarly to the ones posted, but was not able to trigger it, regardless of ARC size or ZFS encryption. It has ZFS on root.
However: the command posted in the already mentioned OpenZFS bug report triggers a crash and an unresponsive system every time it is run, without any VM running:
Bash:
sync; echo 3 > /proc/sys/vm/drop_caches; echo 1 > /proc/sys/vm/compact_memory
 
