No SAS2008 after upgrade

Does pve-kernel-6.2.6-1-pve work for you?

It's the last working kernel for my SAS2008.
My SAS2008 firmware is older than yours.
6.2.6 works here, 6.2.9 and 6.2.11 didn't.

The mpt3sas versions are the same, so it's likely related to changes in the way the kernel reserves memory for PCI devices between 6.2.6 and 6.2.9.
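A quick way to test that theory is to boot the non-working kernel and look for failed resource reservations. A rough check, assuming the HBA sits at PCI address 0000:25:00.0 as in the log below:

Code:
# Look for failed PCI resource reservations on the non-working kernel
dmesg | grep -iE "can't reserve|pci_request_selected_regions|_scsih_probe"
# Compare the memory regions assigned to the HBA between kernels
lspci -vv -s 25:00.0 | grep -i region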

Code:
root@pve:~# uname -a
Linux pve 6.2.6-1-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.6-1 (2023-03-14T17:08Z) x86_64 GNU/Linux

root@pve:~# dmesg | grep mpt
[    0.005485]   Device   empty
[    0.104414] Dynamic Preempt: voluntary
[    0.104433] rcu: Preemptible hierarchical RCU implementation.
[    3.094204] mpt3sas version 43.100.00.00 loaded
[    3.094303] mpt3sas 0000:25:00.0: enabling device (0000 -> 0002)
[    3.094351] mpt2sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (32790076 kB)
[    3.148088] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    3.148102] mpt2sas_cm0: MSI-X vectors supported: 1
[    3.148105] mpt2sas_cm0:  0 1 1
[    3.148175] mpt2sas_cm0: High IOPs queues : disabled
[    3.148177] mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 48
[    3.148178] mpt2sas_cm0: iomem(0x00000000c0540000), mapped(0x000000008b334849), size(16384)
[    3.148181] mpt2sas_cm0: ioport(0x000000000000d000), size(256)
[    3.202454] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    3.229934] mpt2sas_cm0: scatter gather: sge_in_main_msg(1), sge_per_chain(9), sge_per_io(128), chains_per_io(15)
[    3.229957] mpt2sas_cm0: request pool(0x0000000046d4089e) - dma(0x111f00000): depth(1836), frame_size(128), pool_size(229 kB)
[    3.231566] mpt2sas_cm0: sense pool(0x000000009cc4a363) - dma(0x112a00000): depth(1599), element_size(96), pool_size (149 kB)
[    3.231568] mpt2sas_cm0: sense pool(0x000000009cc4a363)- dma(0x112a00000): depth(1599),element_size(96), pool_size(0 kB)
[    3.231588] mpt2sas_cm0: reply pool(0x00000000e881631c) - dma(0x112a40000): depth(1900), frame_size(128), pool_size(237 kB)
[    3.231591] mpt2sas_cm0: config page(0x000000006851487f) - dma(0x1129b8000): size(512)
[    3.231592] mpt2sas_cm0: Allocated physical memory: size(3652 kB)
[    3.231592] mpt2sas_cm0: Current Controller Queue Depth(1596),Max Controller Queue Depth(1720)
[    3.231593] mpt2sas_cm0: Scatter Gather Elements per IO(128)
[    3.275603] mpt2sas_cm0: overriding NVDATA EEDPTagMode setting
[    3.275996] mpt2sas_cm0: LSISAS2008: FWVersion(13.00.57.00), ChipRevision(0x03), BiosVersion(07.25.00.00)
[    3.275998] mpt2sas_cm0: Protocol=(Initiator), Capabilities=(Raid,TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[    3.276297] mpt2sas_cm0: sending port enable !!
[    4.861162] mpt2sas_cm0: hba_port entry: 00000000f689b86e, port: 255 is added to hba_port list
[    4.862710] mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x500605b003fb4400), phys(8)
[    5.633818] mpt2sas_cm0: handle(0x9) sas_address(0x4433221100000000) port_type(0x1)
[    6.132265] mpt2sas_cm0: handle(0xa) sas_address(0x4433221101000000) port_type(0x1)
[    6.629047] mpt2sas_cm0: handle(0xb) sas_address(0x4433221102000000) port_type(0x1)
[    6.992827] mpt2sas_cm0: handle(0xc) sas_address(0x4433221105000000) port_type(0x1)
[    7.178208] mpt2sas_cm0: handle(0xd) sas_address(0x4433221106000000) port_type(0x1)
[    7.423151] mpt2sas_cm0: handle(0xe) sas_address(0x4433221107000000) port_type(0x1)
[    7.735400] mpt2sas_cm0: handle(0xf) sas_address(0x4433221104000000) port_type(0x1)
[   10.755035] mpt2sas_cm0: port enable: SUCCESS
 
I just did a clean reinstall of Proxmox 8 in legacy BIOS mode (so no UEFI), and now it works like a charm.
Code:
Linux pve 6.2.16-4-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-5 (2023-07-14T17:53Z) x86_64 GNU/Linux

I'm wondering what the issue is, and whether running/booting Proxmox in non-UEFI mode has any implications?
 
I just found the thread below:
https://forum.proxmox.com/threads/issue-with-bcm57840-on-proxmox-5-2.47063/

Adding "pci=realloc=off" to the kernel command line in UEFI mode seems to have fixed it...
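For anyone wanting to try the same thing, the change looks roughly like this on a GRUB-booted install (a sketch, assuming the stock /etc/default/grub layout):

Code:
# Append the option to the default kernel command line in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=realloc=off"
nano /etc/default/grub
update-grub
reboot
# Confirm it took effect after the reboot
cat /proc/cmdline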

Code:
Linux pve 6.2.16-4-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-5 (2023-07-14T17:53Z) x86_64 GNU/Linux

Code:
root@pve:~# dmesg | grep mpt
[    0.018914]   Device   empty
[    0.439754] Dynamic Preempt: voluntary
[    0.439833] rcu: Preemptible hierarchical RCU implementation.
[    2.413312] mpt3sas version 43.100.00.00 loaded
[    2.413748] mpt2sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (65804848 kB)
[    2.468176] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    2.468203] mpt2sas_cm0: MSI-X vectors supported: 1
[    2.468210] mpt2sas_cm0:  0 1 1
[    2.468283] mpt2sas_cm0: High IOPs queues : disabled
[    2.468288] mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 46
[    2.468292] mpt2sas_cm0: iomem(0x00000000fb1c0000), mapped(0x000000002c3cb0d2), size(16384)
[    2.468298] mpt2sas_cm0: ioport(0x000000000000d000), size(256)
[    2.522539] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    2.550114] mpt2sas_cm0: scatter gather: sge_in_main_msg(1), sge_per_chain(9), sge_per_io(128), chains_per_io(15)
[    2.550221] mpt2sas_cm0: request pool(0x00000000235f91b3) - dma(0x116d80000): depth(3492), frame_size(128), pool_size(436 kB)
[    2.565995] mpt2sas_cm0: sense pool(0x00000000563c5044) - dma(0x117880000): depth(3367), element_size(96), pool_size (315 kB)
[    2.566107] mpt2sas_cm0: reply pool(0x0000000073733999) - dma(0x117900000): depth(3556), frame_size(128), pool_size(444 kB)
[    2.566118] mpt2sas_cm0: config page(0x000000000fcc6225) - dma(0x117814000): size(512)
[    2.566123] mpt2sas_cm0: Allocated physical memory: size(7579 kB)
[    2.566126] mpt2sas_cm0: Current Controller Queue Depth(3364),Max Controller Queue Depth(3432)
[    2.566131] mpt2sas_cm0: Scatter Gather Elements per IO(128)
[    2.609855] mpt2sas_cm0: overriding NVDATA EEDPTagMode setting
[    2.610584] mpt2sas_cm0: LSISAS2008: FWVersion(20.00.07.00), ChipRevision(0x03), BiosVersion(07.27.01.01)
[    2.610593] mpt2sas_cm0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[    2.611628] mpt2sas_cm0: sending port enable !!
[    4.118392] mpt2sas_cm0: hba_port entry: 00000000ce8856fc, port: 255 is added to hba_port list
[    4.120450] mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x5003005233445566), phys(8)
[    4.121123] mpt2sas_cm0: handle(0x9) sas_address(0x5000c50095dd0c2d) port_type(0x1)
[    4.121624] mpt2sas_cm0: handle(0xa) sas_address(0x4433221107000000) port_type(0x1)
[    4.369523] mpt2sas_cm0: handle(0xb) sas_address(0x4433221103000000) port_type(0x1)
[    4.370528] mpt2sas_cm0: handle(0xc) sas_address(0x5000c50095d5a9c9) port_type(0x1)
[    4.371492] mpt2sas_cm0: handle(0xd) sas_address(0x5000c5008e1b1cf9) port_type(0x1)
[    4.372463] mpt2sas_cm0: handle(0xe) sas_address(0x5000c50095daff09) port_type(0x1)
[    4.373434] mpt2sas_cm0: handle(0xf) sas_address(0x5000c50095dd1c41) port_type(0x1)
[    4.374389] mpt2sas_cm0: handle(0x10) sas_address(0x5000c50095dcfc85) port_type(0x1)
[    9.250572] mpt2sas_cm0: port enable: SUCCESS
 
Hi All
I am still afraid to upgrade because of this thread, but I do want to. I have made an rsync backup with this command:
Code:
root@pve:~# rsync -av --exclude={'/ZFS-1.5TB','/zfs-pool0','/r10-12x160','/R10-4x2TB','/R10-4xssd','/mnt','/media','/lost+found','/proc','/tmp'} / /mnt/storage2/backup
Code:
root@pve:~# ls /
bin   etc   lib64       mnt   r10-12x160  root  srv  usr        zfs-pool0
boot  home  lost+found  opt   R10-4x2TB   run   sys  var
dev   lib   media       proc  R10-4xssd   sbin  tmp  ZFS-1.5TB
Code:
root@pve:~# ls /mnt/storage2/backup
bin   dev  home  lib64  proc  run   srv  tmp  var
boot  etc  lib   opt    root  sbin  sys  usr
This is my first time using rsync. Is this sufficient to restore my 7.4 OS?
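For what it's worth, I imagine the restore would look roughly like the sketch below, run from a rescue/live system with the old root mounted. /mnt/root is just a placeholder path, and the bootloader would likely need reinstalling as well:

Code:
# Copy the backup back onto the mounted root filesystem
rsync -av /mnt/storage2/backup/ /mnt/root/
# Recreate any excluded directories that are missing (mount points, /proc, /tmp, etc.)
mkdir -p /mnt/root/{proc,tmp,mnt,media,ZFS-1.5TB,zfs-pool0,r10-12x160,R10-4x2TB,R10-4xssd}
chmod 1777 /mnt/root/tmp
# Reinstall the bootloader if needed (grub-install, update-grub)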

Thanks
 
I guess you could also take a snapshot, if you are using ZFS as your main filesystem.
If you are, it would just be a matter of rolling back to said snapshot to undo any (bad) changes or effects made after that snapshot.
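A minimal sketch, assuming the default Proxmox ZFS layout where the root dataset is rpool/ROOT/pve-1:

Code:
# Snapshot the root dataset before the upgrade
zfs snapshot rpool/ROOT/pve-1@pre-upgrade
# List snapshots to confirm it exists
zfs list -t snapshot
# Roll back if the upgrade misbehaves
zfs rollback rpool/ROOT/pve-1@pre-upgrade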
 
Thanks for the reply. The OS is on a hardware RAID on a Dell H200 controller. Most of the VMs are on 4 SSDs in a ZFS RAID 10 on the same controller. The snapshots and other stuff are stored on an old Dell MD1000 disk shelf on another LSI SAS2116 controller. Storage2 is an SSD installed in the CD drive bay.
I had not been worried about backing up the OS, figuring I could just reinstall Proxmox, but this thread now makes me suspicious of that theory.
Thanks
 
For those who are still having this issue under 6.2 and 6.5, can you post your full `dmesg`? I was having this issue, and the fix had nothing to do with the driver; it appears to be a memory reservation issue.
 
I'm still having the same issue with a fresh install of Proxmox 8.0.2 running kernel 6.2.16-19-pve. I added the following line to /etc/default/grub and ran update-grub, but this did not seem to solve the issue for me. What did I get wrong?

GRUB_CMDLINE_LINUX="pci=realloc=off"

Attached is my dmesg.
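One thing I still want to verify is whether the option actually reached the running kernel. As far as I know, UEFI installs with a ZFS root boot via systemd-boot, which ignores /etc/default/grub; in that case the command line lives in /etc/kernel/cmdline instead:

Code:
# Check whether pci=realloc=off is on the running kernel's command line
cat /proc/cmdline
# On systemd-boot installs, add the option to /etc/kernel/cmdline and apply it with:
proxmox-boot-tool refresh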
 
Thank you for the suggestion, much appreciated. I added it and ran update-grub, but I seem to have the same issue:

Code:
dmesg | grep mpt
[    0.010779]   Device   empty
[    0.285720] Dynamic Preempt: voluntary
[    0.285751] rcu: Preemptible hierarchical RCU implementation.
[    0.309425] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
[    1.332118] mpt3sas version 43.100.00.00 loaded
[    1.336375] mpt2sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (65812964 kB)
[    1.411528] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    1.411541] mpt2sas_cm0: MSI-X vectors supported: 16
[    1.411545] mpt2sas_cm0:  0 8 8
[    1.411707] mpt2sas_cm0: High IOPs queues : disabled
[    1.411710] mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 60
[    1.411712] mpt2sas0-msix1: PCI-MSI-X enabled: IRQ 61
[    1.411713] mpt2sas0-msix2: PCI-MSI-X enabled: IRQ 62
[    1.411715] mpt2sas0-msix3: PCI-MSI-X enabled: IRQ 63
[    1.411717] mpt2sas0-msix4: PCI-MSI-X enabled: IRQ 64
[    1.411718] mpt2sas0-msix5: PCI-MSI-X enabled: IRQ 65
[    1.411720] mpt2sas0-msix6: PCI-MSI-X enabled: IRQ 66
[    1.411722] mpt2sas0-msix7: PCI-MSI-X enabled: IRQ 67
[    1.411723] mpt2sas_cm0: iomem(0x00000000fba40000), mapped(0x00000000ded0adc1), size(65536)
[    1.411728] mpt2sas_cm0: ioport(0x000000000000e000), size(256)
[    1.466043] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    1.493521] mpt2sas_cm0: scatter gather: sge_in_main_msg(1), sge_per_chain(9), sge_per_io(128), chains_per_io(15)
[    1.493630] mpt2sas_cm0: request pool(0x00000000f9108611) - dma(0x114c00000): depth(3200), frame_size(128), pool_size(400 kB)
[    1.500765] mpt2sas_cm0: sense pool(0x00000000f8c84ac3) - dma(0x115300000): depth(2939), element_size(96), pool_size (275 kB)
[    1.500832] mpt2sas_cm0: reply pool(0x0000000004f0ab28) - dma(0x115380000): depth(3264), frame_size(128), pool_size(408 kB)
[    1.500838] mpt2sas_cm0: config page(0x00000000dd6c161b) - dma(0x11526d000): size(512)
[    1.500841] mpt2sas_cm0: Allocated physical memory: size(7012 kB)
[    1.500843] mpt2sas_cm0: Current Controller Queue Depth(2936),Max Controller Queue Depth(3072)
[    1.500845] mpt2sas_cm0: Scatter Gather Elements per IO(128)
[    1.548349] mpt2sas_cm0: LSISAS2308: FWVersion(14.00.00.00), ChipRevision(0x05), BiosVersion(07.27.00.00)
[    1.548356] mpt2sas_cm0: Protocol=(Initiator), Capabilities=(Raid,TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[    1.548948] mpt2sas_cm0: sending port enable !!
[    1.549173] mpt3sas 0000:0b:00.0: BAR 1: can't reserve [mem 0x809c0000-0x809c3fff 64bit]
[    1.549180] mpt2sas_cm1: pci_request_selected_regions: failed
[    1.549214] mpt2sas_cm1: failure at drivers/scsi/mpt3sas/mpt3sas_scsih.c:12348/_scsih_probe()!
[    4.140036] mpt2sas_cm0: hba_port entry: 00000000e8be26ae, port: 255 is added to hba_port list
[    4.141214] mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x5003048019a7b801), phys(8)
[    9.268918] mpt2sas_cm0: port enable: SUCCESS
[   11.193253] systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).

/etc/default/grub looks like this:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet reserve=0x80000000,0xfffffff"
GRUB_CMDLINE_LINUX="reserve=0x80000000,0xfffffff"

Thanks.
 
