Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

Sadly, on my Topton SFF PC with an Intel N6005, VMs won't boot on the 6.1 kernel (same problems on the unofficial 6.0-edge kernels). On 5.19 there is no problem. VMs are not able to boot from local-lvm storage (NVMe SSD). It's possible to create a new VM and run the installation up to virtual disk partitioning, but then it fails.

I get an error like this: Could not read L1 table: Invalid argument TASK ERROR: start failed: QEMU exited with code 1
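
In case it helps anyone in the same spot: staying on the known-good kernel until this is sorted can be done with proxmox-boot-tool's pinning (a sketch, assuming a current pve-kernel-helper; the 5.19 version string below is only an example, use one from the list):
Code:
proxmox-boot-tool kernel list                # show installed kernels
proxmox-boot-tool kernel pin 5.19.17-2-pve   # example version; pick yours from the list
reboot
proxmox-boot-tool kernel unpin               # later, to go back to the default
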
Way back in the beginning, I had Proxmox installed as ext4. Later I installed as ZFS (single drive). Things have been more stable, but I've done a lot of things here and there, so I can't say whether that helped or not. But maybe try a ZFS install and see what happens? I'm using the 6.1 kernel, btw.
 
When checking this issue with PBS backup/restore and the opt-in 6.1 kernel, we managed to reproduce it on some setups.
That would be those with /tmp located on ZFS (the norm if the whole root file system is on ZFS).
There, the open call with the O_TMPFILE flag set, used for downloading the previous backup index for incremental backup, fails with EOPNOTSUPP (95, Operation not supported).

It seems the ZFS 2.1.7 release still misses some compat with the 6.1 kernel, which reworked parts of the VFS layer w.r.t. tempfile handling. We notified ZFS upstream with a minimal reproducer for now and will look into providing a stop-gap fix if upstream needs more time to handle it as they deem correct.
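
For reference, the failing pattern boils down to a few lines of C (a sketch of the issue, assuming /tmp sits on a ZFS dataset; not necessarily the exact reproducer sent upstream):
Code:
/* sketch: O_TMPFILE probe; fails with EOPNOTSUPP on ZFS 2.1.7 + kernel 6.1 */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* open an unnamed temporary file in /tmp */
    int fd = open("/tmp", O_TMPFILE | O_RDWR, 0600);
    if (fd < 0) {
        fprintf(stderr, "open(O_TMPFILE): %s (%d)\n", strerror(errno), errno);
        return 1;
    }
    printf("O_TMPFILE is supported here\n");
    close(fd);
    return 0;
}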

Until then we recommend either avoiding the initial pve-kernel-6.1.0-1-pve package if your root filesystem is on ZFS, or moving the /tmp directory away from ZFS, e.g., by making it a tmpfs mount.
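
One way to do the latter on a stock Debian/PVE install is the tmp.mount unit systemd already ships (a sketch; note that /tmp contents then live in RAM and do not survive reboots):
Code:
# move /tmp to tmpfs using the unit shipped with systemd
cp /usr/share/systemd/tmp.mount /etc/systemd/system/
systemctl enable --now tmp.mount
# verify
findmnt /tmp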

adapt to 6.1 changes for open syscall with TMPFILE option
https://git.proxmox.com/?p=zfsonlinux.git;a=commit;h=85a3ff856de2d62eea2026987fa310742cd6068a

From pve-no-subscription:
  • running kernel: 6.1.2-1-pve
  • zfsutils-linux: 2.1.7-pve3

Restores and consecutive backups are working again. Thank you, Sir! :)
 
drm_mode_object_unregister still crashes with several stack traces when unloading/unbinding amdgpu with pve-kernel-6.1.2-1-pve.
This worked without stack traces with kernel 5.19. Is this something Proxmox might be causing or be able to fix, or is it an upstream driver issue that we have to live with?
 
or is it an upstream driver issue that we have to live with?
While it probably is the latter, you do not have to live with it. You can report it upstream (kernel bugzilla; albeit not all reports get a lot of attention, it really depends on the subsystem) or on the mailing list used for AMD development, CCing the maintainers.

If you're able to bisect it, doing so would be extremely helpful for anybody to find and fix the underlying cause.
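
For reference, the mechanics of a kernel bisect are roughly this (heavily abbreviated sketch; each step needs a kernel build and boot):
Code:
git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux
git bisect start
git bisect bad v6.1      # first known-bad release
git bisect good v5.19    # last known-good release
# build + boot the checked-out commit, test the amdgpu unbind, then mark it:
git bisect good          # or: git bisect bad
# repeat until git names the first bad commit, then: git bisect reset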
 
It seems the ZFS 2.1.7 release still misses some compat with the 6.1 kernel, which reworked parts of the VFS layer w.r.t. tempfile handling. We notified ZFS upstream with a minimal reproducer for now and will look into providing a stop-gap fix if upstream needs more time to handle it as they deem correct.

Until then we recommend either avoiding the initial pve-kernel-6.1.0-1-pve package if your root filesystem is on ZFS, or moving the /tmp directory away from ZFS, e.g., by making it a tmpfs mount.
The latest zfs-2.1.8 lists 6.1 compatibility.
 
The latest zfs-2.1.8 lists 6.1 compatibility.
We already backported the respective fix to our 2.1.7-pve3, which was moved to pve-no-subscription on 2023-01-16; that's why @Neobin wrote:
From pve-no-subscription:
  • running kernel: 6.1.2-1-pve
  • zfsutils-linux: 2.1.7-pve3

Restores and consecutive backups are working again.
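
To double-check what a node is actually running, something along these lines works (pveversion and zfs version are standard tooling):
Code:
pveversion -v | grep -E 'pve-kernel|zfs'
zfs version    # prints both userland (zfs-) and kernel module (zfs-kmod-) versions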
 
When's it due to hit the enterprise repo?
Will there be a posting on this thread when this kernel is available on the enterprise repository?
As of now the 6.1 opt-in kernel is also available on the enterprise repository, including the 2.1.7 ZFS with the extra Linux 6.1 compatibility patch backported.
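
For anyone opting in from the enterprise repository, installation works the same as on the other repositories (sketch; pve-kernel-6.1 is the opt-in meta-package on PVE 7):
Code:
apt update
apt install pve-kernel-6.1
reboot
# verify after reboot
uname -r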
 
FYI, for some reason the Intel GPU debug tools don't work with 6.1 (no problems with 5.19 or the stock kernel).

intel_gpu_top gives Failed to initialize PMU! (Permission denied)

vainfo fails as well with other error messages.

Hardware acceleration still seems to work fine though.

Using Raptor Lake.
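
Not a confirmed cause for 6.1, but that PMU error is the same one you get when perf access is restricted, so comparing kernel.perf_event_paranoid between the kernels might be worth a try (pure assumption on the root cause here):
Code:
sysctl kernel.perf_event_paranoid        # compare the value on 5.19 vs 6.1
sysctl -w kernel.perf_event_paranoid=0   # temporarily relax it for testing
intel_gpu_top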
 
11th gen Intel here, and HW acceleration would stop working for me completely after a couple of passes. Nothing definitive in the Plex/Emby logs, so I just reverted to 5.15. I thought it was just me or my particular setup.
 
Installed the 6.1 kernel, but VMs crash when passing through SATA disks to a VM. The VM will not start again until the disks are removed from the hardware list.
I do not have any logs, I am afraid, just the info.

Rolled back to 5.19 and now it's all good.
 
Installed the 6.1 kernel, but VMs crash when passing through SATA disks to a VM. The VM will not start again until the disks are removed from the hardware list.
I do not have any logs, I am afraid, just the info.

Rolled back to 5.19 and now it's all good.
Can confirm this with passthrough block devices on 6.1.6-1, as well as a lot of errors in the syslog. Going back to 6.1.2-1, everything is OK again.

Feb 01 19:48:05 pve-p-ma kernel: BUG: kernel NULL pointer dereference, address: 0000000000000005
Feb 01 19:48:05 pve-p-ma kernel: #PF: supervisor write access in kernel mode
Feb 01 19:48:05 pve-p-ma kernel: #PF: error_code(0x0002) - not-present page
Feb 01 19:48:05 pve-p-ma kernel: PGD 0 P4D 0
Feb 01 19:48:05 pve-p-ma kernel: Oops: 0002 [#1] PREEMPT SMP NOPTI
Feb 01 19:48:05 pve-p-ma kernel: CPU: 27 PID: 4342 Comm: kvm Tainted: P O 6.1.6-1-pve #1
Feb 01 19:48:05 pve-p-ma kernel: Hardware name: ASUS System Product Name/Pro WS WRX80E-SAGE SE WIFI, BIOS 1003 02/18/2022
Feb 01 19:48:05 pve-p-ma kernel: RIP: 0010:__bio_split_to_limits+0x226/0x490
Feb 01 19:48:05 pve-p-ma kernel: Code: f7 de 23 74 24 24 89 74 24 24 c1 ee 09 48 8b 0c 24 ba 00 0c 00 00 4c 89 e7 e8 06 39 ff ff 49 89 c5 48 85 c0 0f 84 fd 00 00 00 <41> 81 4d 10 00 40 00 00 41 8b 5d 28 48 b8 00 00 00 00 00 00 00 80
Feb 01 19:48:05 pve-p-ma kernel: RSP: 0018:ffffbf800bcef998 EFLAGS: 00010286
Feb 01 19:48:05 pve-p-ma kernel: RAX: ffff9e5d6f8c4301 RBX: 0000000000056000 RCX: 000000000005001b
Feb 01 19:48:05 pve-p-ma kernel: RDX: 000000000004e01b RSI: 0d6e4021a3d854a8 RDI: 000000000003a520
Feb 01 19:48:05 pve-p-ma kernel: RBP: ffffbf800bcefa00 R08: ffff9e5d6f8c7200 R09: 0000000000001000
Feb 01 19:48:05 pve-p-ma kernel: R10: 0000000000001000 R11: 0000000000001000 R12: ffff9e5d6f8c7240
Feb 01 19:48:05 pve-p-ma kernel: R13: fffffffffffffff5 R14: 0000000000000000 R15: ffff9e5cc16a5ea8
Feb 01 19:48:05 pve-p-ma kernel: FS: 00007fdeb5353040(0000) GS:ffff9e9afd8c0000(0000) knlGS:0000000000000000
Feb 01 19:48:05 pve-p-ma kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Feb 01 19:48:05 pve-p-ma kernel: CR2: 0000000000000005 CR3: 0000000112abc000 CR4: 0000000000350ee0
Feb 01 19:48:05 pve-p-ma kernel: Call Trace:
Feb 01 19:48:05 pve-p-ma kernel: <TASK>
Feb 01 19:48:05 pve-p-ma kernel: blk_mq_submit_bio+0xae/0x590
Feb 01 19:48:05 pve-p-ma kernel: ? __iov_iter_get_pages_alloc+0x149/0x900
Feb 01 19:48:05 pve-p-ma kernel: __submit_bio+0xff/0x190
Feb 01 19:48:05 pve-p-ma kernel: submit_bio_noacct_nocheck+0x257/0x2a0
Feb 01 19:48:05 pve-p-ma kernel: submit_bio_noacct+0x20d/0x610
Feb 01 19:48:05 pve-p-ma kernel: submit_bio+0x6f/0x90
Feb 01 19:48:05 pve-p-ma kernel: __blkdev_direct_IO_async+0x124/0x200
Feb 01 19:48:05 pve-p-ma kernel: blkdev_read_iter+0xd3/0x200
Feb 01 19:48:05 pve-p-ma kernel: io_read+0xd3/0x510
Feb 01 19:48:05 pve-p-ma kernel: ? get_sigset_argpack.constprop.0+0x70/0x70
Feb 01 19:48:05 pve-p-ma kernel: ? fget+0x83/0xb0
Feb 01 19:48:05 pve-p-ma kernel: ? io_writev_prep_async+0x80/0x80
Feb 01 19:48:05 pve-p-ma kernel: io_issue_sqe+0x6b/0x410
Feb 01 19:48:05 pve-p-ma kernel: ? aa_file_perm+0x12f/0x630
Feb 01 19:48:05 pve-p-ma kernel: io_submit_sqes+0x21b/0x650
Feb 01 19:48:05 pve-p-ma kernel: ? __fget_light.part.0+0x8c/0xd0
Feb 01 19:48:05 pve-p-ma kernel: __do_sys_io_uring_enter+0x39d/0xa50
Feb 01 19:48:05 pve-p-ma kernel: ? wake_up_q+0x90/0x90
Feb 01 19:48:05 pve-p-ma kernel: __x64_sys_io_uring_enter+0x29/0x30
Feb 01 19:48:05 pve-p-ma kernel: do_syscall_64+0x59/0x90
Feb 01 19:48:05 pve-p-ma kernel: ? exit_to_user_mode_prepare+0x37/0x180
Feb 01 19:48:05 pve-p-ma kernel: ? syscall_exit_to_user_mode+0x26/0x50
Feb 01 19:48:05 pve-p-ma kernel: ? __x64_sys_read+0x1a/0x20
Feb 01 19:48:05 pve-p-ma kernel: ? do_syscall_64+0x69/0x90
Feb 01 19:48:05 pve-p-ma kernel: ? do_syscall_64+0x69/0x90
Feb 01 19:48:05 pve-p-ma kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd
Feb 01 19:48:05 pve-p-ma kernel: RIP: 0033:0x7fdebfea72e9
Feb 01 19:48:05 pve-p-ma kernel: Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 77 8b 0d 00 f7 d8 64 89 01 48
Feb 01 19:48:05 pve-p-ma kernel: RSP: 002b:00007ffc8d8be0c8 EFLAGS: 00000216 ORIG_RAX: 00000000000001aa
Feb 01 19:48:05 pve-p-ma kernel: RAX: ffffffffffffffda RBX: 00007fde876f68e0 RCX: 00007fdebfea72e9
Feb 01 19:48:05 pve-p-ma kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000015
Feb 01 19:48:05 pve-p-ma kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000008
Feb 01 19:48:05 pve-p-ma kernel: R10: 0000000000000000 R11: 0000000000000216 R12: 0000563caa63a7f8
Feb 01 19:48:05 pve-p-ma kernel: R13: 0000563caa63a8b0 R14: 0000563caa63a7f0 R15: 0000000000000000
Feb 01 19:48:05 pve-p-ma kernel: </TASK>
Feb 01 19:48:05 pve-p-ma kernel: Modules linked in: ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter bpfilter nf_tables 8021q garp mrp bonding tls softdog nfnetlink_log nfnetlink ipmi_ssif input_leds hid_generic usbkbd usbhid hid r8153_ecm cdc_ether usbnet intel_rapl_msr intel_rapl_common edac_mce_amd kvm_amd kvm ast drm_vram_helper drm_ttm_helper ttm crct10dif_pclmul drm_kms_helper polyval_clmulni polyval_generic ghash_clmulni_intel drm sha512_ssse3 aesni_intel eeepc_wmi asus_wmi acpi_ipmi i2c_algo_bit crypto_simd ledtrig_audio fb_sys_fops ipmi_si syscopyarea sparse_keymap r8152 cryptd ipmi_devintf sysfillrect platform_profile rapl video efi_pstore wmi_bmof ccp mii sysimgblt k10temp mxm_wmi ipmi_msghandler mac_hid zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) vhost_net vhost vhost_iotlb tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi nct6775 nct6775_core hwmon_vid sunrpc
Feb 01 19:48:05 pve-p-ma kernel: ip_tables x_tables autofs4 uas usb_storage raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 multipath linear simplefb raid0 xhci_pci xhci_pci_renesas xhci_hcd ixgbe nvme crc32_pclmul vfio_pci xfrm_algo ahci dca nvme_core vfio_pci_core libahci mdio nvme_common irqbypass vfio_virqfd vfio_iommu_type1 i2c_piix4 vfio wmi
Feb 01 19:48:05 pve-p-ma kernel: CR2: 0000000000000005
Feb 01 19:48:05 pve-p-ma kernel: ---[ end trace 0000000000000000 ]---
Feb 01 19:48:05 pve-p-ma kernel: RIP: 0010:__bio_split_to_limits+0x226/0x490
Feb 01 19:48:05 pve-p-ma kernel: Code: f7 de 23 74 24 24 89 74 24 24 c1 ee 09 48 8b 0c 24 ba 00 0c 00 00 4c 89 e7 e8 06 39 ff ff 49 89 c5 48 85 c0 0f 84 fd 00 00 00 <41> 81 4d 10 00 40 00 00 41 8b 5d 28 48 b8 00 00 00 00 00 00 00 80
Feb 01 19:48:05 pve-p-ma kernel: RSP: 0018:ffffbf800bcef998 EFLAGS: 00010286
Feb 01 19:48:05 pve-p-ma kernel: RAX: ffff9e5d6f8c4301 RBX: 0000000000056000 RCX: 000000000005001b
Feb 01 19:48:05 pve-p-ma kernel: RDX: 000000000004e01b RSI: 0d6e4021a3d854a8 RDI: 000000000003a520
Feb 01 19:48:05 pve-p-ma kernel: RBP: ffffbf800bcefa00 R08: ffff9e5d6f8c7200 R09: 0000000000001000
Feb 01 19:48:05 pve-p-ma kernel: R10: 0000000000001000 R11: 0000000000001000 R12: ffff9e5d6f8c7240
Feb 01 19:48:05 pve-p-ma kernel: R13: fffffffffffffff5 R14: 0000000000000000 R15: ffff9e5cc16a5ea8
Feb 01 19:48:05 pve-p-ma kernel: FS: 00007fdeb5353040(0000) GS:ffff9e9afd8c0000(0000) knlGS:0000000000000000
Feb 01 19:48:05 pve-p-ma kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Feb 01 19:48:05 pve-p-ma kernel: CR2: 0000000000000005 CR3: 0000000112abc000 CR4: 0000000000350ee0
Feb 01 19:48:06 pve-p-ma systemd-timesyncd[3091]: Initial synchronization to time server 88.99.76.254:123 (2.debian.pool.ntp.org).
Feb 01 19:48:17 pve-p-ma pvestatd[4142]: VM 510 qmp command failed - VM 510 qmp command 'query-proxmox-support' failed - got timeout
Feb 01 19:48:20 pve-p-ma pve-guests[4325]: <root@pam> end task UPID:pve-p-ma:000010E6:00000A3D:63DAB3D4:startall::root@pam: OK
Feb 01 19:48:20 pve-p-ma systemd[1]: Finished PVE guests.
Feb 01 19:48:20 pve-p-ma systemd[1]: Starting Proxmox VE scheduler...
Feb 01 19:48:20 pve-p-ma pvestatd[4142]: storage 'remote-backup01' is not online
Feb 01 19:48:21 pve-p-ma pvescheduler[4564]: starting server
Feb 01 19:48:21 pve-p-ma systemd[1]: Started Proxmox VE scheduler.
Feb 01 19:48:21 pve-p-ma systemd[1]: Reached target Multi-User System.
Feb 01 19:48:21 pve-p-ma systemd[1]: Reached target Graphical Interface.
Feb 01 19:48:21 pve-p-ma systemd[1]: Starting Update UTMP about System Runlevel Changes...
Feb 01 19:48:21 pve-p-ma systemd[1]: systemd-update-utmp-runlevel.service: Succeeded.
Feb 01 19:48:21 pve-p-ma systemd[1]: Finished Update UTMP about System Runlevel Changes.
Feb 01 19:48:21 pve-p-ma systemd[1]: Startup finished in 59.363s (firmware) + 7.538s (loader) + 11.092s (kernel) + 47.740s (userspace) = 2min 5.735s.
Feb 01 19:48:23 pve-p-ma pvestatd[4142]: storage 'remote-storage01' is not online
Feb 01 19:48:23 pve-p-ma pvestatd[4142]: status update time (14.192 seconds)
Feb 01 19:48:32 pve-p-ma pvestatd[4142]: VM 510 qmp command failed - VM 510 qmp command 'query-proxmox-support' failed - unable to connect to VM 510 qmp socket - timeout after 51 retries
etc.
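
Given the io_uring frames in the trace, a possible stop-gap while staying on the newer kernel (an assumption on my part, not a confirmed fix) would be to switch the passed-through disks to a different AIO backend; the disk line's aio property supports native and threads besides the io_uring default:
Code:
# <vmid>, disk id and size are placeholders; keep your existing options and add aio=
qm set <vmid> --scsi1 /dev/disk/by-id/<your-disk-id>,size=<size>,aio=threads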
 
Can confirm this with passthrough block devices on 6.1.6-1, as well as a lot of errors in the syslog. Going back to 6.1.2-1, everything is OK again.

I have the same problem, and I switched back to 6.1.2-1, where there are no problems.

Best regards,

Marcel
 
While it probably is the latter, you do not have to live with it. You can report it upstream (kernel bugzilla; albeit not all reports get a lot of attention, it really depends on the subsystem) or on the mailing list used for AMD development, CCing the maintainers.

If you're able to bisect it, doing so would be extremely helpful for anybody to find and fix the underlying cause.
@janssensm was so kind as to actually put in the effort to do this and reported back with good news.
 
Installed the 6.1 kernel, but VMs crash when passing through SATA disks to a VM. The VM will not start again until the disks are removed from the hardware list.
I do not have any logs, I am afraid, just the info.

Rolled back to 5.19 and now it's all good.
I have the same problem, and I switched back to 6.1.2-1, where there are no problems.

Best regards,

Marcel
Thanks for reporting this. Can you please post the VM config and the disk model you're passing through (just to be sure).
 
Thanks for reporting this. Can you please post the VM config and the disk model you're passing through (just to be sure).
Hi Thomas,

for sure, nothing special in my case:

Code:
agent: 1
balloon: 0
bios: ovmf
boot: c
bootdisk: scsi0
cores: 8
cpu: host,hidden=1,flags=-md-clear;-pcid;-spec-ctrl;-ssbd;+ibpb;+virt-ssbd;+amd-ssbd;-amd-no-ssb;+pdpe1gb;-hv-tlbflush;-hv-evmcs;+aes
cpuunits: 90
efidisk0: local:510/vm-510-disk-0.qcow2,size=128K
hotplug: network,usb
hugepages: 1024
machine: q35
memory: 8192
name: OMV5
net0: virtio=XX:XX:XX,bridge=vmbr1
net1: virtio=XX:XX:XX,bridge=vmbr0
numa: 1
onboot: 1
ostype: l26
scsi0: local-hssd01:510/vm-510-disk-0.raw,backup=0,iothread=1,size=80G,ssd=1
scsi1: /dev/disk/by-id/ata-ST12000VN0007-2GS116_ZJV26PMA,size=11176G
scsi2: /dev/disk/by-id/ata-ST12000VN0007-2GS116_ZJV28FXF,size=11176G
scsi3: /dev/disk/by-id/ata-ST14000NE0008-2RX103_ZL2CWH2V,size=13039G
scsihw: virtio-scsi-single
smbios1: uuid=c798....
sockets: 1
startup: order=5,up=30
tablet: 0
vcpus: 8
vga: none
vmgenid: 578....
vmstatestorage: local-lvm
 
Thanks for reporting this. Can you please post the VM config and the disk model you're passing through (just to be sure).

Code:
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0
cores: 8
cpu: host
efidisk0: vmpool4:vm-100-disk-0,efitype=4m,size=4M
hostpci0: 0000:0e:00.0,pcie=1
hostpci1: 0000:0e:00.1,pcie=1
hostpci2: 0000:0f:00.0,pcie=1
hostpci3: 0000:0f:00.1,pcie=1
machine: q35
memory: 12288
name: router.home.v16.de
numa: 0
onboot: 1
ostype: l26
scsi0: vmpool4:vm-100-disk-1,discard=on,size=122108M,ssd=1
scsi1: /dev/disk/by-id/ata-WDC_WD10EZEX-00WN4A0_WD-WMC6Y0F2TY7W,backup=0,replicate=0,size=976762584K
scsi2: /dev/disk/by-id/ata-WDC_WD10EZEX-08WN4A0_WD-WCC6Y7HRL4Y0,backup=0,replicate=0,size=976762584K
scsihw: virtio-scsi-pci
smbios1: uuid=3375b099-d5bc-4411-ab6f-39380a92b299
sockets: 1
startup: order=10,up=90
usb0: host=5-3
vga: qxl
vmgenid: 1cf9c672-f6d3-4a68-864b-77f8d28a6c93

Code:
agent: 1
balloon: 0
boot: c
bootdisk: scsi0
cores: 2
cpu: host
memory: 6144
name: files1.home.v16.de
net0: virtio=EE:13:30:75:08:06,bridge=vmbr1
numa: 0
onboot: 1
ostype: l26
scsi0: vmpool3:vm-103-disk-0,discard=on,size=40G,ssd=1
scsi1: /dev/disk/by-id/ata-WDC_WD101EFBX-68B0AN0_VCJNRAAP,backup=0,replicate=0,size=9314G
scsi2: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N0H5A8XF,backup=0,replicate=0,size=2930266584K
scsi3: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5VN96LL,backup=0,replicate=0,size=2930266584K
scsi4: /dev/disk/by-id/ata-WDC_WD30EFRX-68N32N0_WD-WCC7K7HPYL5T,backup=0,replicate=0,size=2930266584K
scsi5: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1TJ0VR4,backup=0,replicate=0,size=2930266584K
scsi6: /dev/disk/by-id/ata-WDC_WD40EZRZ-22GXCB0_WD-WCC7K3FDDK7A,backup=0,replicate=0,size=3907018584K
scsi7: /dev/disk/by-id/ata-ST1000LM024_HN-M101MBB_S30YJ9HF404343,backup=0,replicate=0,size=976762584K
scsi8: /dev/disk/by-id/ata-WDC_WD161KFGX-68AFPN0_2PH9STAT,backup=0,replicate=0,size=14902G
scsihw: virtio-scsi-pci
smbios1: uuid=a80184f3-8dd7-4178-a6c7-d094b19f6127
sockets: 1
startup: order=40,up=10
vga: qxl
vmgenid: 65288aae-3b41-4656-8d4c-87c147218db3

Code:
agent: 1
balloon: 0
boot: order=scsi0
cores: 4
cpu: qemu64
keyboard: de
machine: pc-i440fx-7.1
memory: 6144
name: server1.home.v16.de
net0: virtio=C2:5A:85:EE:D3:BB,bridge=vmbr1
numa: 0
onboot: 1
ostype: win11
scsi0: vmpool2:vm-106-disk-0,discard=on,size=100G,ssd=1
scsi1: vmpool2:vm-106-disk-1,discard=on,size=40G,ssd=1
scsi2: /dev/disk/by-id/ata-ST1000LM048-2E7172_WKPCDBYV,backup=0,replicate=0,size=976762584K
scsi3: /dev/disk/by-id/ata-WDC_WDS500G2B0A-00SM50_200521802715,backup=0,replicate=0,size=488386584K
scsihw: virtio-scsi-pci
smbios1: uuid=3aa0478f-b120-41b5-8aaa-2c4f1263b51c
sockets: 1
startup: order=70,up=30
vga: qxl
vmgenid: c47a14b1-b422-443a-91bf-827063ff0de2
 
Thanks for reporting this. Can you please post the VM config and the disk model you're passing through (just to be sure).
The disk model for the three drives is Toshiba MG09.
Passing through the Kingston NVMe disk seems to be fine.

Code:
boot: order=scsi0;net0
cores: 12
cpu: host
memory: 10240
meta: creation-qemu=7.1.0,ctime=1670778339
name: myhost
net0: virtio=3A:10:79:77:F9:E3,bridge=vmbr0,firewall=1,tag=6
net1: virtio=16:E0:02:E8:08:B9,bridge=vmbr0,firewall=1,tag=7
numa: 0
onboot: 1
ostype: l26
scsi0: horder:vm-106-disk-0,discard=on,iothread=1,size=60G
scsi1: /dev/disk/by-id/ata-TOSHIBA_MG09ACA18TE_62X0A0SUFJDH,size=16764G
scsi2: /dev/disk/by-id/ata-TOSHIBA_MG09ACA18TE_8260A051FJDH,size=16764G
scsi4: /dev/disk/by-id/nvme-KINGSTON_SFYRS1000G_50026B76861A8B19,size=976762584K
scsi5: /dev/disk/by-id/ata-TOSHIBA_MG09ACA18TE_52F0A24RFJDH,size=16764G
scsihw: virtio-scsi-single
smbios1: uuid=f738093a-87a5-4084-8482-6692d8b2ff94
sockets: 1
vmgenid: 15800368-a5f5-4739-8bda-7449a5ea4045
 
Anyone else notice major performance changes going from 5.15.x to 6.1.x?

Upgraded one of my heavy-hitter front ends (quad-socket DL560 Gen10) and now we are seeing a major load increase; CPU load has almost doubled.

Going back to 5.15.x has corrected the issue, but with live migration broken in 5.15 it makes things tough.
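
Two things that commonly differ between 5.15 and 6.x kernels and are cheap to compare before digging deeper (a sketch, not a diagnosis):
Code:
# active CPU vulnerability mitigations (newly enabled ones can cost real CPU time)
grep . /sys/devices/system/cpu/vulnerabilities/*
# cpufreq driver and governor in use
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor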

 
