Hi!
I upgraded the test server from PVE 7 to PVE 8 and ran into the following annoying bug:
I issue the server shutdown (init 6), but the shutdown process hangs, looping the following message:
"systemd-shutdown[1]: Failed to get MD_LEVEL property for /dev/mdX, ignoring: No such file or directory"
The storage assembly on boot is done by a script due to its complexity (a simplified sketch follows the layer list below).
I'm using a stacked/layered/nested storage setup:
1st layer: 24-bay disk shelf with dual controllers, connected to the HBA card (HDDs with dual SAS connectors)
2nd layer: multipath devices created for each HDD ( /dev/dm-0, /dev/dm-1, ... )
3rd layer: 3-way RAID-1 arrays created with mdadm ( 8 separate RAID-1 arrays, each with 3 HDDs )
4th layer: LVM (striped) created on top of the RAID-1 arrays ( the storage pool )
5th layer: EXT4 filesystem created on top of the LVM
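For context, the assembly script does roughly the following (simplified sketch only; the array, VG/LV names and mount point here are placeholders, not the real ones):

Code:
#!/bin/bash
# Simplified sketch of the boot-time assembly (device/VG/LV names are placeholders)

# 1) Refresh the multipath maps for the dual-ported HDDs
/sbin/multipath -r

# 2) Assemble the eight 3-way RAID-1 arrays using the identities from mdadm.conf
for i in $(seq 0 7); do
    /sbin/mdadm --assemble --scan "/dev/md$i" || echo "assembly of md$i failed" >&2
done

# 3) Activate the striped volume group built on top of the md arrays
/sbin/vgchange -ay vg_storage

# 4) Mount the EXT4 filesystem
mount /dev/vg_storage/lv_data /mnt/storage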
This setup worked reliably and without errors up to Proxmox 7; since the upgrade, "shutdown" no longer completes, and the improper shutdown leaves the arrays in a degraded state.
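After the next boot, the damage shows up when checking the arrays (md0 used as an example name):

Code:
# Check overall md status and the state of one array after the forced reboot
cat /proc/mdstat
mdadm --detail /dev/md0 | grep -E 'State :|Active Devices|Working Devices|Failed Devices'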
I found the following about this:
https://github.com/canonical/probert/issues/125
According to this, it is a "udev or systemd" bug:
"udevadm does not show the MD_LEVEL information on an inactive RAID"
Is this not fixed/patched in the Debian 12 version?
Any help would be appreciated.
Code:
Hardware:
HPE ML350p gen8
~$ pveversion -v
proxmox-ve: 8.0.1 (running kernel: 6.2.16-3-pve)
pve-manager: 8.0.3 (running version: 8.0.3/bbf3993334bfa916)
pve-kernel-6.2: 8.0.2
pve-kernel-5.15: 7.4-3
pve-kernel-6.2.16-3-pve: 6.2.16-3
pve-kernel-5.15.107-2-pve: 5.15.107-2
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx2
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.3
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.5
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.3
libpve-rs-perl: 0.8.3
libpve-storage-perl: 8.0.1
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
openvswitch-switch: 3.1.0-2
proxmox-backup-client: 2.99.0-1
proxmox-backup-file-restore: 2.99.0-1
proxmox-kernel-helper: 8.0.2
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.1
proxmox-widget-toolkit: 4.0.5
pve-cluster: 8.0.1
pve-container: 5.0.3
pve-docs: 8.0.3
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.2
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.4
pve-qemu-kvm: 8.0.2-3
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1