[SOLVED] Proxmox 8.3-1 can't detect disks with LSI MegaRAID

LkS45

Hello guys!

I have a new server with a MegaRAID 9440, and the setup can't detect any disks. I'm unable to install Proxmox with this controller.

I have read some topics here on the forum, and some people fixed this by updating the distro or changing the kernel. But I can't do that with the ISO installer, right?

Any ideas are welcome!
 
I also had the same issue on some servers with MegaRAID controllers; the workaround is to use kernel 6.5 or, in the case of the installer, an older ISO that ships it.
Another workaround is to set all disks as JBOD and use them with software RAID (Btrfs, ZFS, mdadm, etc...).
Another workaround (even if I have not tested it yet) should be: https://rephlex.de/blog/2021/10/16/fix-for-uefi-hardware-raid-linux-megaraid_sas-io_page_fault/
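To illustrate the second workaround: with the controller set to JBOD, a ZFS pool can be built directly on the disks. A minimal sketch, assuming four disks exposed as /dev/sdb through /dev/sde (the device names and pool name "tank" are placeholders, not from this thread):

```shell
# Create a RAID-Z1 pool named "tank" across the four JBOD disks
# (ashift=12 is appropriate for 4K-sector drives)
zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Verify the pool layout and health
zpool status tank
```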
1. I tried the 8.2 ISO; the problem persists.
2. I don't like the software RAID/ZFS option, since the MegaRAID is better, of course, being hardware RAID.
3. The last option may be the way, but I'm not sure, since the link is from 2021.
 
1. Proxmox 8.2 already has kernel 6.8 as default; 8.1 has 6.5 as default, and after upgrading you can pin kernel 6.5 (I did this on one system with an LSI MegaRAID SAS 2008 controller with this issue).
2. Small note: ZFS and Btrfs are not good for VM disks if you use consumer disks; they can be OK for the system only. Anyway, if you want HW RAID, point 2 is not applicable.
3. AFAIK there wasn't a fix or workaround applied in recent Proxmox kernels, so it should still be applicable, unless you have a different problem.
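To pin kernel 6.5 after the upgrade (point 1), Proxmox ships proxmox-boot-tool. A minimal sketch; the exact 6.5 package version on your system may differ, so check the list first:

```shell
# Show the kernels the boot loader knows about
proxmox-boot-tool kernel list

# Pin a 6.5 kernel so it stays the default across future upgrades
# (replace 6.5.13-6-pve with a version shown by the list command)
proxmox-boot-tool kernel pin 6.5.13-6-pve

# Undo the pin later, once the issue is fixed in a newer kernel
proxmox-boot-tool kernel unpin
```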
 
Well, I tried:
1. A fresh installation with version 8.1, which worked
2. dist-upgrade
3. Reboot, and now the server died; I got dropped into a BusyBox shell

o_O
 
I suppose you didn't pin the 6.5 kernel after the upgrade, so on reboot you booted kernel 6.8, and also without adding the kernel parameters from point 3 (related to the IOMMU). So it's normal that it doesn't see the disks and falls back to BusyBox.

Reboot again and, in the GRUB menu, edit the entry and try adding the kernel parameters from point 3 (based on your CPU, Intel or AMD); if that works, make them permanent in the GRUB config so you can keep using the latest kernel version.
If this doesn't work, select the 6.5 kernel entry in GRUB and afterwards pin it to make it the default.
 
# Edit the file /etc/default/grub
nano /etc/default/grub

# Change the variable GRUB_CMDLINE_LINUX_DEFAULT from GRUB_CMDLINE_LINUX_DEFAULT="quiet" to:
# For an AMD CPU
GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt"
# For an Intel CPU
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt"

# Then update GRUB and reboot
update-grub
reboot
 
Hi, just a follow-up on this thread. I was kicking away at this server recently, a few weeks ago.
I tried to boot a newer kernel after doing updates to Proxmox 8.x.latest > so it is now Proxmox 8.4.11,
and the latest kernel it has available is 6.8.12-13, I believe.
So > it seemed to boot OK and I left it that way for a while > then had issues with the machine locking up.
Rebooted - OK - then it locked up again.
So today I rebooted, observed the console, and tried to use the intel iommu off stanza - it was absent after my updates, I guess.
Various confusing - not so clear - errors hit the console, approximately thus:

Code:
PTE read access not set
DRHD handling fault status reg 2
dmar_fault 1710 callbacks suppressed
DMAR DRHD read no_PASID req dev 05:00.0 fault ..
fault reason 0x06 PTE read access is not set

and after 5 minutes or so of this it dumped to a non-booting GRUB sad state.


After that I went back to booting my older kernel, /boot/vmlinuz-5.15.158-2-pve,
and pinned it - it boots up pretty quick and painless with this older kernel, and things seem functional.
So I still get the feeling there are some changes in the megaraid driver - between the 5.15 kernel and the 6.8 kernel -
that are not playing nice with my MegaRAID controller - so life is fussy.
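For reference, the "intel iommu off" stanza mentioned above normally lives in /etc/default/grub and has to be re-applied with update-grub after editing. A sketch of restoring it, assuming a GRUB-booted system (EFI systems managed by proxmox-boot-tool keep the command line in /etc/kernel/cmdline instead):

```shell
# Add intel_iommu=off to the default kernel command line in /etc/default/grub,
# e.g. GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=off"
nano /etc/default/grub

# Regenerate the GRUB config so the change takes effect, then reboot
update-grub
reboot
```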

I'll see if this thing runs solid for ~1 week or more - and loop back to the thread then. But big picture, I know the server was quite stable from ~late March to ~mid-August 2025, when I just left it alone on the old kernel. And things have been drama only in the last ~3 weeks since I started trying to muck around with "gosh, better get this up to date", etc. Fun.

I don't know if anyone else out there is going to recognize any of this / if there are other folks still with some legacy MegaRAID hardware RAID controllers in service.

I did a dig-in last week and double-checked that the hard drives appear to be healthy; SMART status is good, and I do not have a bad disk / fail status or anything like that.
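A note on checking SMART behind a MegaRAID controller: smartctl needs the megaraid device type to reach the physical disks, since the OS only sees the virtual drive. A sketch, assuming the virtual drive appears as /dev/sda and the physical disks have device IDs starting at 0 (the IDs vary per setup; storcli or the controller BIOS will show them):

```shell
# Quick health verdict for the physical disk with MegaRAID device ID 0
smartctl -H -d megaraid,0 /dev/sda

# Full SMART attributes for the same disk; bump the ID for each member disk
smartctl -a -d megaraid,0 /dev/sda
```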

So, just a small update, "maybe of use to someone". Possibly me in the future, or not; we shall see.


Tim