Hi,
after upgrading from PVE 5 to PVE 6, and with it the kernel step to 5.4.44, boot gets stuck in the initramfs because the LVM disks won't be discovered anymore:
ALERT! /dev/mapper/pve-root does not exist
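For reference, these are the kinds of checks one can run at the (initramfs) prompt to see whether the kernel sees the disks at all; a minimal sketch, assuming the busybox shell of a stock Proxmox initramfs with the lvm applet available:

Bash:
# Is the Areca driver loaded?
grep arcmsr /proc/modules
# Does the kernel see any disks behind the controller?
cat /proc/partitions
# Rescan for LVM physical volumes and try to activate the volume groups
lvm pvscan
lvm vgchange -ay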
Suspected the Areca RAID controller as the problem, but it is supported and included in kernel 5.4 as it was before. modinfo returns the correct driver (version 1.40), and the 1203 still seems to be included:
Bash:
filename: /lib/modules/5.4.44-1-pve/kernel/drivers/scsi/arcmsr/arcmsr.ko
version: v1.40.00.10-20190116
license: Dual BSD/GPL
description: Areca ARC11xx/12xx/16xx/188x SAS/SATA RAID Controller Driver
author: Nick Cheng, C.L. Huang <support@areca.com.tw>
srcversion: 2AC00BF99B6869D9C7844F6
alias: pci:v000017D3d00001884sv*sd*bc*sc*i*
alias: pci:v000017D3d00001880sv*sd*bc*sc*i*
alias: pci:v000017D3d00001681sv*sd*bc*sc*i*
alias: pci:v000017D3d00001680sv*sd*bc*sc*i*
alias: pci:v000017D3d00001381sv*sd*bc*sc*i*
alias: pci:v000017D3d00001380sv*sd*bc*sc*i*
alias: pci:v000017D3d00001280sv*sd*bc*sc*i*
alias: pci:v000017D3d00001270sv*sd*bc*sc*i*
alias: pci:v000017D3d00001260sv*sd*bc*sc*i*
alias: pci:v000017D3d00001230sv*sd*bc*sc*i*
alias: pci:v000017D3d00001220sv*sd*bc*sc*i*
alias: pci:v000017D3d00001214sv*sd*bc*sc*i*
alias: pci:v000017D3d00001210sv*sd*bc*sc*i*
alias: pci:v000017D3d00001203sv*sd*bc*sc*i*
alias: pci:v000017D3d00001202sv*sd*bc*sc*i*
alias: pci:v000017D3d00001201sv*sd*bc*sc*i*
alias: pci:v000017D3d00001200sv*sd*bc*sc*i*
alias: pci:v000017D3d00001170sv*sd*bc*sc*i*
alias: pci:v000017D3d00001160sv*sd*bc*sc*i*
alias: pci:v000017D3d00001130sv*sd*bc*sc*i*
alias: pci:v000017D3d00001120sv*sd*bc*sc*i*
alias: pci:v000017D3d00001110sv*sd*bc*sc*i*
depends:
retpoline: Y
intree: Y
name: arcmsr
vermagic: 5.4.44-1-pve SMP mod_unload modversions
parm: msix_enable:Enable MSI-X interrupt(0 ~ 1), msix_enable=1(enable), =0(disable) (int)
parm: msi_enable:Enable MSI interrupt(0 ~ 1), msi_enable=1(enable), =0(disable) (int)
parm: host_can_queue: adapter queue depth(32 ~ 1024), default is 128 (int)
parm: cmd_per_lun: device queue depth(1 ~ 128), default is 32 (int)
parm: dma_mask_64: set DMA mask to 64 bits(0 ~ 1), dma_mask_64=1(64 bits), =0(32 bits) (int)
parm: set_date_time: send date, time to iop(0 ~ 1), set_date_time=1(enable), default(=0) is disable (int)
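For completeness, a quick way to verify from the working 4.15 kernel that the controller is detected and which driver binds to it (a sketch; 17d3 is Areca's PCI vendor ID, as visible in the aliases above):

Bash:
# List the Areca controller and the kernel driver currently in use for it
lspci -nnk -d 17d3:
# Load the driver manually and check the kernel log for probe errors
modprobe arcmsr
dmesg | grep -i arcmsr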
Updated the RAID controller to the current firmware, updated the BIOS, etc.
The server is a Supermicro with an X10SRL-F board and an Areca 1203 8-port RAID controller. Here is the output of pveversion -v:
Bash:
proxmox-ve: 6.2-1 (running kernel: 4.15.18-28-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-3
pve-kernel-helper: 6.2-3
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-4.15: 5.4-17
pve-kernel-4.15.18-28-pve: 4.15.18-56
pve-kernel-4.15.18-10-pve: 4.15.18-32
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-8
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
rootdelay was set to at least 10 seconds, which changes nothing.
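For reference, a sketch of the usual way rootdelay is passed on a legacy-boot Proxmox install (the exact GRUB_CMDLINE_LINUX_DEFAULT contents here are just an example):

Bash:
# /etc/default/grub -- example kernel command line with rootdelay
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"
# Regenerate the GRUB configuration afterwards
update-grub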
Running lvm vgchange -ay in the initramfs changes nothing; the volume groups won't be discovered at all. dmesg doesn't show anything helpful. The old kernel 4.15.18-28 works without problems.
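One thing that could still be verified is whether arcmsr actually made it into the initramfs of the new kernel; a sketch using the standard initramfs-tools utilities on Debian/Proxmox:

Bash:
# Check whether the module is packed into the failing kernel's initramfs
lsinitramfs /boot/initrd.img-5.4.44-1-pve | grep arcmsr
# If it is missing, force-include it and rebuild
echo arcmsr >> /etc/initramfs-tools/modules
update-initramfs -u -k 5.4.44-1-pve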
Pretty much out of ideas here.
Appreciate any ideas where to look next.
Thanks and best regards