Proxmox VE 7.2 megaraid issues

Hello,
in Proxmox 7.4-1 with kernel 5.19.17-2-pve the UBSAN messages still occur

Code:
================================================================================
UBSAN: array-index-out-of-bounds in drivers/scsi/megaraid/megaraid_sas_fp.c:125:9
index 1 is out of range for type 'MR_LD_SPAN_MAP [1]'
CPU: 0 PID: 292 Comm: kworker/0:1H Not tainted 5.19.17-2-pve #1
Hardware name: Epsylon Super Server/H12DSi-N6, BIOS 2.6 04/13/2023
Workqueue: kblockd blk_mq_run_work_fn
Call Trace:
 <TASK>
 dump_stack_lvl+0x49/0x63
 dump_stack+0x10/0x16
 ubsan_epilogue+0x9/0x3f
 __ubsan_handle_out_of_bounds.cold+0x44/0x49
 get_updated_dev_handle+0x2de/0x360 [megaraid_sas]
 megasas_build_and_issue_cmd_fusion+0x1617/0x17e0 [megaraid_sas]
 megasas_queue_command+0x196/0x1f0 [megaraid_sas]
 scsi_queue_rq+0x3c1/0xc30
 blk_mq_dispatch_rq_list+0x1e7/0x860
 ? sbitmap_get+0xce/0x220
 blk_mq_do_dispatch_sched+0x313/0x370
 __blk_mq_sched_dispatch_requests+0x103/0x150
 blk_mq_sched_dispatch_requests+0x35/0x70
 __blk_mq_run_hw_queue+0x3b/0xb0
 blk_mq_run_work_fn+0x1f/0x30
 process_one_work+0x21f/0x3f0
 worker_thread+0x50/0x3e0
 ? rescuer_thread+0x3a0/0x3a0
 kthread+0xf0/0x120
 ? kthread_complete_and_exit+0x20/0x20
 ret_from_fork+0x22/0x30
 </TASK>
================================================================================


Code:
proxmox-ve: 7.4-1 (running kernel: 5.19.17-2-pve)
pve-manager: 7.4-17 (running version: 7.4-17/513c62be)
pve-kernel-5.15: 7.4-7
pve-kernel-5.19: 7.2-15
pve-kernel-5.19.17-2-pve: 5.19.17-2
pve-kernel-5.15.126-1-pve: 5.15.126-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.3-1
proxmox-backup-file-restore: 2.4.3-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.2
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-6
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+1
pve-firewall: 4.3-5
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1
 
Hi,
kernel 5.19 hasn't been updated in a long time; it was superseded by the 6.1 opt-in kernel and then the 6.2 opt-in kernel. Please upgrade to the 6.2 kernel and see whether the issue persists: https://forum.proxmox.com/threads/opt-in-linux-6-2-kernel-for-proxmox-ve-7-x-available.124189
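
For reference, on Proxmox VE 7.x the opt-in kernel can be installed roughly like this (see the linked thread for the authoritative instructions):

Code:
apt update
apt install pve-kernel-6.2

After a reboot, uname -r should then report a 6.2 kernel.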
 
Hi everyone, I had a bad experience. Proxmox showed me the error I can see in the attached image, so I tried to reboot and the result was that the VM no longer started. I had to delete the storage, perform a Wipe Disk, recreate the volume group (and the storage) and restore the VM. The restore started on Saturday 10 February 2024 at 09:23 and is still in progress. We are at 68%; the VM has a 107 TB disk, of which 86 TB is in use. I had no other solution.
 

Attachments

  • T03.jpg (43.8 KB)
> Hi everyone, I had a bad experience. Proxmox showed me the error I can see in the attached image,

Hi, the error discussed in this thread is a false positive and can't be the reason for your problem.

> so I tried to reboot

How did you reboot?

> and the result was that the VM no longer started.

Did you give the VM enough time to shut down?

Does your VM use SATA for storage? If so, you might have run into https://bugzilla.proxmox.com/show_bug.cgi?id=2874.
 
Hi,
as @mow already said, the message in your screenshot is unrelated to your problem. It's a warning from UBSAN, which is used to detect out-of-bounds array accesses in kernel code. However, the kernel also uses dynamically sized trailing arrays, and the in-kernel convention for declaring them changed a while ago, which causes false positives like this one. It is probably fixed in kernels >= 6.1 by: https://git.kernel.org/pub/scm/linu.../?id=eeb3bab77244b8d91e4e9b611177cd1196900163
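
In case you are wondering why UBSAN complains even though nothing is actually wrong, here is a minimal C sketch of the pattern (the names are simplified and are not the exact megaraid_sas structures):

Code:
#include <stdlib.h>

/* Old convention: a 1-element array at the end of the struct, used as a
 * variable-length array by allocating extra space behind it. UBSAN's
 * bounds check only knows the declared size [1], so any access at index
 * >= 1 is reported even though the memory itself is valid. */
struct span_map_old {
    int count;
    int span[1];
};

/* Newer convention: a C99 flexible array member has no declared bound,
 * so the same access does not trigger the check. */
struct span_map_new {
    int count;
    int span[];
};

int main(void)
{
    size_t n = 4;

    struct span_map_old *o = malloc(sizeof(*o) + (n - 1) * sizeof(o->span[0]));
    o->span[1] = 42;   /* fine at runtime, but flagged when built with -fsanitize=bounds */

    struct span_map_new *f = malloc(sizeof(*f) + n * sizeof(f->span[0]));
    f->span[1] = 42;   /* same memory layout, no warning */

    free(o);
    free(f);
    return 0;
}

Fixes like the commit linked above essentially convert structures of the first kind into the second, so the bounds check no longer misfires.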

Please post the VM configuration (qm config <ID>) and the output of pveversion -v. Please also check your system logs/journal from around the time the issue happened.

If it really is bug #2874, you can try a tool like TestDisk to recover the partition table: https://www.cgsecurity.org/wiki/TestDisk
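
If you go down that route, the rough sequence on the host would be something like this (the device name is only a placeholder; if at all possible, work on a copy of the affected disk rather than the original):

Code:
apt install testdisk
testdisk /dev/sdX

TestDisk then walks you through an interactive menu to analyse the disk and, if it finds the lost partitions, write a recovered partition table.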
 
