I have been using Proxmox 7.4-1 (running kernel 5.15.108-1-pve) installed on an LSI MegaRAID SAS 9240-8i with 2 disks, and everything worked fine.
After installing the new Proxmox 8.2 on an SSD, I no longer see the disks connected to the RAID controller.
If I reboot into the old system, the virtual disk (sda) on the RAID controller is available.
In the old system: sda is the virtual disk on the RAID controller, sdb is the new SSD.
When I change the boot device to the SSD with the new Proxmox 8.2 install, the disks on the RAID controller are not available; only the SSD shows up (as sda).
I don't see my RAID array at all.
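The controller and disk information below was gathered with roughly the following commands (a sketch; the exact invocations may have differed slightly):
Code:
lspci -v -s 02:00.0        # RAID controller details and the kernel driver in use
dmesg | grep megaraid_sas  # driver initialization messages
lsblk                      # block devices visible to the system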
Code:
02:00.0 RAID bus controller: Broadcom / LSI MegaRAID SAS 2008 [Falcon] (rev 03)
Subsystem: Broadcom / LSI MegaRAID SAS 9240-8i
Flags: bus master, fast devsel, latency 0, IRQ 17
I/O ports at d000 [size=256]
Memory at f7460000 (64-bit, non-prefetchable) [size=16K]
Memory at f7400000 (64-bit, non-prefetchable) [size=256K]
Expansion ROM at f7440000 [disabled] [size=128K]
Capabilities: [50] Power Management version 3
Capabilities: [68] Express Endpoint, MSI 00
Capabilities: [d0] Vital Product Data
Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [c0] MSI-X: Enable+ Count=15 Masked-
Capabilities: [100] Advanced Error Reporting
Capabilities: [138] Power Budgeting <?>
Kernel driver in use: megaraid_sas
Kernel modules: megaraid_sas
Code:
[ 1.067096] megaraid_sas 0000:02:00.0: BAR:0x1 BAR's base_addr(phys):0x00000000f7460000 mapped virt_addr:0x00000000041cbb5a
[ 1.067100] megaraid_sas 0000:02:00.0: FW now in Ready state
[ 1.067102] megaraid_sas 0000:02:00.0: 63 bit DMA mask and 32 bit consistent mask
[ 1.068725] megaraid_sas 0000:02:00.0: requested/available msix 1/1 poll_queue 0
[ 1.068727] megaraid_sas 0000:02:00.0: current msix/online cpus : (1/4)
[ 1.068729] megaraid_sas 0000:02:00.0: RDPQ mode : (disabled)
[ 1.150622] megaraid_sas 0000:02:00.0: controller type : iMR(0MB)
[ 1.150625] megaraid_sas 0000:02:00.0: Online Controller Reset(OCR) : Enabled
[ 1.150626] megaraid_sas 0000:02:00.0: Secure JBOD support : No
[ 1.150627] megaraid_sas 0000:02:00.0: NVMe passthru support : No
[ 1.150628] megaraid_sas 0000:02:00.0: FW provided TM TaskAbort/Reset timeout : 0 secs/0 secs
[ 1.150629] megaraid_sas 0000:02:00.0: JBOD sequence map support : No
[ 1.150630] megaraid_sas 0000:02:00.0: PCI Lane Margining support : No
[ 1.150631] megaraid_sas 0000:02:00.0: megasas_init_mfi: fw_support_ieee=67108864
[ 1.150642] megaraid_sas 0000:02:00.0: INIT adapter done
[ 1.150643] megaraid_sas 0000:02:00.0: JBOD sequence map is disabled megasas_setup_jbod_map 5804
[ 1.214624] megaraid_sas 0000:02:00.0: pci id : (0x1000)/(0x0073)/(0x1000)/(0x9240)
[ 1.214627] megaraid_sas 0000:02:00.0: unevenspan support : no
[ 1.214628] megaraid_sas 0000:02:00.0: firmware crash dump : no
[ 1.214629] megaraid_sas 0000:02:00.0: JBOD sequence map : disabled
[ 1.214631] megaraid_sas 0000:02:00.0: Max firmware commands: 30 shared with default hw_queues = 1 poll_queues 0
My /etc/default/grub (relevant part):
Code:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""
When I reboot into the old system, the virtual disk (sda) on the RAID controller is available:
Code:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 60G 0 loop
loop1 7:1 0 8G 0 loop
sda 8:0 0 930,4G 0 disk
├─sda1 8:1 0 400G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 9,8G 0 part [SWAP]
sdb 8:16 0 476,9G 0 disk
├─sdb1 8:17 0 1007K 0 part
├─sdb2 8:18 0 1G 0 part
└─sdb3 8:19 0 475,9G 0 part
├─pve-swap 253:0 0 8G 0 lvm
├─pve-root 253:1 0 96G 0 lvm
├─pve-data_tmeta 253:2 0 3,6G 0 lvm
│ └─pve-data-tpool 253:4 0 348,8G 0 lvm
│ ├─pve-data 253:5 0 348,8G 1 lvm
│ ├─pve-vm--102--disk--0 253:6 0 60G 0 lvm
│ ├─pve-vm--106--disk--0 253:7 0 8G 0 lvm
│ ├─pve-vm--103--disk--0 253:8 0 52G 0 lvm
│ ├─pve-vm--100--disk--0 253:9 0 32G 0 lvm
│ ├─pve-vm--101--disk--0 253:10 0 127G 0 lvm
│ └─pve-vm--101--disk--1 253:11 0 4M 0 lvm
└─pve-data_tdata 253:3 0 348,8G 0 lvm
└─pve-data-tpool 253:4 0 348,8G 0 lvm
├─pve-data 253:5 0 348,8G 1 lvm
├─pve-vm--102--disk--0 253:6 0 60G 0 lvm
├─pve-vm--106--disk--0 253:7 0 8G 0 lvm
├─pve-vm--103--disk--0 253:8 0 52G 0 lvm
├─pve-vm--100--disk--0 253:9 0 32G 0 lvm
├─pve-vm--101--disk--0 253:10 0 127G 0 lvm
└─pve-vm--101--disk--1 253:11 0 4M 0 lvm
When I change the boot device to the SSD with the new Proxmox 8.2 install, the disks on the RAID controller are not available; only the SSD (now sda) is shown:
Code:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 476.9G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 1G 0 part /boot/efi
└─sda3 8:3 0 475.9G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 96G 0 lvm /
├─pve-data_tmeta 253:2 0 3.6G 0 lvm
│ └─pve-data-tpool 253:4 0 348.8G 0 lvm
│ ├─pve-data 253:5 0 348.8G 1 lvm
│ ├─pve-vm--102--disk--0 253:6 0 60G 0 lvm
│ ├─pve-vm--106--disk--0 253:7 0 8G 0 lvm
│ ├─pve-vm--103--disk--0 253:8 0 52G 0 lvm
│ ├─pve-vm--100--disk--0 253:9 0 32G 0 lvm
│ ├─pve-vm--101--disk--0 253:10 0 127G 0 lvm
│ └─pve-vm--101--disk--1 253:11 0 4M 0 lvm
└─pve-data_tdata 253:3 0 348.8G 0 lvm
└─pve-data-tpool 253:4 0 348.8G 0 lvm
├─pve-data 253:5 0 348.8G 1 lvm
├─pve-vm--102--disk--0 253:6 0 60G 0 lvm
├─pve-vm--106--disk--0 253:7 0 8G 0 lvm
├─pve-vm--103--disk--0 253:8 0 52G 0 lvm
├─pve-vm--100--disk--0 253:9 0 32G 0 lvm
├─pve-vm--101--disk--0 253:10 0 127G 0 lvm
└─pve-vm--101--disk--1 253:11 0 4M 0 lvm
What I have tried (commands sketched below):
- Added pci=realloc=off to /etc/kernel/cmdline, then ran proxmox-boot-tool refresh and rebooted.
- Added GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pci=realloc=off" to /etc/default/grub, then ran update-grub and rebooted.
- Changed the BIOS setting CSM > Storage to UEFI ONLY and also to LEGACY ONLY.
- Changed the kernel:
  - apt install pve-kernel-6.1
  - proxmox-boot-tool kernel add 6.1.10-1-pve
  - proxmox-boot-tool kernel pin 6.1.10-1-pve
  - reboot
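For reference, this is roughly the command sequence for the attempts above (a sketch; the existing contents of /etc/kernel/cmdline and /etc/default/grub are abbreviated, only the relevant changes are noted):
Code:
# 1) pci=realloc=off via proxmox-boot-tool:
#    appended " pci=realloc=off" to the single line in /etc/kernel/cmdline, then:
proxmox-boot-tool refresh
reboot

# 2) pci=realloc=off via GRUB:
#    set GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pci=realloc=off" in /etc/default/grub, then:
update-grub
reboot

# 3) Opt-in 6.1 kernel:
apt install pve-kernel-6.1
proxmox-boot-tool kernel add 6.1.10-1-pve
proxmox-boot-tool kernel pin 6.1.10-1-pve
reboot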
What can I try to restore access to the disks on the RAID controller with Proxmox 8.2?