cannot mount lvm storageprox after upgrade

Jul 1, 2020
Hi,

After updating my Proxmox server (to kernel 5.4.44-1), it can no longer mount the LVM volume storageprox after a reboot.
I get the following error:
[screenshot attached: proxmox.JPG]
How can I regain access to my lvm container?

If I try to mount the volume with:
mount -a
I get the following error:

mount: /mnt/data: can't find LABEL=storageprox.

/etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
LABEL=storageprox /mnt/data ext4 defaults 0 2
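Two things worth checking against that fstab entry: whether the kernel currently sees any filesystem labelled `storageprox` at all, and, as a stop-gap only (not a root-cause fix), adding `nofail` so a missing label no longer blocks the boot. The snippet below is a sketch; the `/tmp/fstab.snippet` scratch file is illustrative, so nothing touches the live `/etc/fstab`:

```shell
# Which labels does the kernel see right now? If "storageprox" is missing
# here, the underlying block device never appeared at boot:
ls /dev/disk/by-label/ 2>/dev/null || echo "no labelled filesystems visible"

# Stop-gap: add "nofail" so a missing device no longer drops the boot into
# emergency mode. Shown on a scratch copy -- never edit /etc/fstab blind:
printf 'LABEL=storageprox /mnt/data ext4 defaults 0 2\n' > /tmp/fstab.snippet
sed -i 's/defaults/defaults,nofail/' /tmp/fstab.snippet
cat /tmp/fstab.snippet
# -> LABEL=storageprox /mnt/data ext4 defaults,nofail 0 2
```

With `nofail` the machine at least boots to a usable state where the real problem can be debugged.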


pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-1-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-3
pve-kernel-helper: 6.2-3
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve2
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-8
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

/etc/pve/storage.cfg
dir: local
path /var/lib/vz
content vztmpl,iso,backup

lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir

dir: Backup
path /mnt/data/backup
content backup,images
maxfiles 2
nodes pve
shared 0

dir: HDD-VM
path /mnt/data/img-vm
content iso,images,vztmpl,rootdir
nodes pve
shared 0


lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 447.1G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 446.6G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 96G 0 lvm /
├─pve-data_tmeta 253:2 0 3.3G 0 lvm
│ └─pve-data-tpool 253:4 0 320G 0 lvm
│ ├─pve-data 253:5 0 320G 0 lvm
│ ├─pve-vm--100--cloudinit 253:6 0 4M 0 lvm
│ ├─pve-vm--106--disk--0 253:7 0 32G 0 lvm
│ ├─pve-vm--106--cloudinit 253:8 0 4M 0 lvm
│ ├─pve-vm--119--disk--0 253:9 0 32G 0 lvm
│ └─pve-vm--117--disk--0 253:10 0 32G 0 lvm
└─pve-data_tdata 253:3 0 320G 0 lvm
└─pve-data-tpool 253:4 0 320G 0 lvm
├─pve-data 253:5 0 320G 0 lvm
├─pve-vm--100--cloudinit 253:6 0 4M 0 lvm
├─pve-vm--106--disk--0 253:7 0 32G 0 lvm
├─pve-vm--106--cloudinit 253:8 0 4M 0 lvm
├─pve-vm--119--disk--0 253:9 0 32G 0 lvm
└─pve-vm--117--disk--0 253:10 0 32G 0 lvm
sdb 8:16 0 1.1T 0 disk
sdc 8:32 0 1.1T 0 disk
sdd 8:48 0 1.1T 0 disk
sde 8:64 0 1.1T 0 disk
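Notably, sdb through sde above show no partitions or md children at all, which fits an array that was never assembled. A hedged check (device names taken from the lsblk output above; each step is guarded so the sequence is safe to paste, but on the affected node it should run as root):

```shell
# Do sdb-sde carry Linux software-RAID superblocks? blkid reports
# TYPE="linux_raid_member" on md array members:
scan="$(for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    blkid "$d" 2>/dev/null || echo "$d: no signature visible (or disk absent)"
done)"
echo "$scan"

# mdadm can also read the superblock directly, e.g.:
#   mdadm --examine /dev/sdb
```

If the disks are raid members but no `/dev/md*` device exists, the array simply was not assembled at boot.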

lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
base-100-disk-0 pve Vri---tz-k 32.00g data
data pve twi-aotz-- <320.04g 15.79 1.24
[data_tdata] pve Twi-ao---- <320.04g
[data_tmeta] pve ewi-ao---- <3.27g
[lvol0_pmspare] pve ewi------- <3.27g
root pve -wi-ao---- 96.00g
swap pve -wi-ao---- 8.00g
vm-100-cloudinit pve Vwi-a-tz-- 4.00m data 0.00
vm-106-cloudinit pve Vwi-a-tz-- 4.00m data 9.38
vm-106-disk-0 pve Vwi-a-tz-- 32.00g data base-100-disk-0 42.29
vm-117-disk-0 pve Vwi-a-tz-- 32.00g data 57.26
vm-119-disk-0 pve Vwi-a-tz-- 32.00g data 53.54
 
Hi,

If I start the machine with an older kernel (5.3.10-1-pve), I have no problems.
But I would like to use the newest kernel, so fixing this is important to me (and probably others).
 
Hi,

I figured out that the problem has to do with my software RAID volume.
The array does not get assembled during boot, so the volume cannot be mounted and the boot does not complete correctly.
Has software RAID been removed from kernel 5.4?
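Software RAID (md) was not dropped from kernel 5.4. A more common cause after a kernel upgrade is that the new initramfs lacks the mdadm configuration (or mdadm itself), so the array never assembles and the labelled filesystem on top of it stays invisible. A sketch of the usual recovery, assuming an md array (the commented commands require root on the node):

```shell
# Is the md driver active, and are any arrays assembled right now?
mdstat="$(cat /proc/mdstat 2>/dev/null || echo 'md driver not loaded')"
echo "$mdstat"

# If nothing is assembled, the usual sequence (run as root) is:
#   apt install mdadm                                # ensure the tooling exists
#   mdadm --assemble --scan                          # assemble from on-disk superblocks
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the array persistently
#   update-initramfs -u -k all                       # bake the config into every initramfs
# After a reboot, "mount -a" should find LABEL=storageprox again.
```

The `update-initramfs -u -k all` step is the key one: it ensures every installed kernel, including the new 5.4 one, can assemble the array early in boot.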
 
Can you explain in more detail how you resolved this? I'm running into the same issue on my Proxmox!
 
