[SOLVED] LVM: Not Available at boot

vince_122

Hi there,

Since I moved my storage to LVM + iSCSI, my LVM volumes always come up with the status "NOT available" when I reboot my physical nodes.
I have to type the following command:

vgchange -a y

on all 3 physical nodes.

My storage.cfg looks like this:

iscsi: ISCSI-srv18
        target iqn.2012-06.srv16:vsrv18
        portal 10.10.10.16
        content none
        nodes srv18

iscsi: ISCSI-srv19
        target iqn.2012-06.srv16:srv19
        portal 10.10.10.16
        content none
        nodes srv19

iscsi: ISCSI-srv17
        target iqn.2012-06.srv16:srv17
        portal 10.10.10.16
        content none
        nodes srv17

lvm: VG-srv18
        vgname VG-srv18
        content images
        nodes srv18

lvm: VG-srv17
        vgname VG-srv17
        content images
        nodes srv17

lvm: VG-srv19
        vgname VG-srv19
        content images
        nodes srv19

What can I do to get my LVM volumes to come up with the status "Available" at boot?

Output of pveversion -v:

pve-manager: 2.2-24 (pve-manager/2.2/7f9cfa4c)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-80
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-16-pve: 2.6.32-80
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-1
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-28
qemu-server: 2.0-62
pve-firmware: 1.0-21
libpve-common-perl: 1.0-36
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-34
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.2-7
ksm-control-daemon: 1.1-1

Thanks,
Vince.
 
Please use the GUI to add the lvm storage (then it will include a reference to the base storage).
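
For context, a rough sketch of what a GUI-created LVM-over-iSCSI entry tends to look like in storage.cfg (the storage name and the LUN identifier below are placeholders, and the exact option spelling may differ between versions):

lvm: QNAP-LVM
        vgname QNAP-LVM-VG
        base ISCSI-srv18:0.0.0.scsi-<LUN-identifier>
        shared
        content images

The base line is the reference to the underlying iSCSI storage, which is what lets Proxmox bring up the iSCSI session before it activates the volume group.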
 
Hi!

Thanks for your answer.
I changed everything in my storage configuration and did as you suggested (created the LVM VG through the GUI and assigned it to all nodes, with shared use enabled).

Here is my vgdisplay output:

--- Volume group ---
VG Name QNAP-LVM-VG
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 8
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 500.00 GiB
PE Size 4.00 MiB
Total PE 127999
Alloc PE / Size 122880 / 480.00 GiB
Free PE / Size 5119 / 20.00 GiB
VG UUID d8qsX0-vhw1-MPHf-a5j1-s9WY-mL6M-qHQerv

And my lvdisplay output:

--- Logical volume ---
LV Path /dev/QNAP-LVM-VG/QNAP-vxplfsrv19_lv
LV Name QNAP-vxplfsrv19_lv
VG Name QNAP-LVM-VG
LV UUID Mdrg03-eUJA-8s9c-JX81-gFTJ-n0JM-5RibE9
LV Write Access read/write
LV Creation host, time vxplfsrv19, 2013-01-15 08:44:57 +0100
LV Status NOT available
LV Size 120.00 GiB
Current LE 30720
Segments 2
Allocation inherit
Read ahead sectors auto


--- Logical volume ---
LV Path /dev/QNAP-LVM-VG/QNAP-vxplfsrv200_lv
LV Name QNAP-vxplfsrv200_lv
VG Name QNAP-LVM-VG
LV UUID EZp8q4-d1HC-9EMq-WPIk-rEJq-5VUq-lpb4R8
LV Write Access read/write
LV Creation host, time vxplfsrv200, 2013-01-15 14:09:45 +0100
LV Status NOT available
LV Size 120.00 GiB
Current LE 30720
Segments 1
Allocation inherit
Read ahead sectors auto

As you can see, I only have 1 VG for all my nodes, and 1 LV for each node.
I created each LV from its own node (you can see the LV Creation host, time), but when I reboot a node I have to type:

vgchange -a y

because all my LVs are NOT available.

What do I have to do to stick each LV to its corresponding node?

Thanks,
Vince.
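
A possible direction for the "stick each LV to its node" part (not from this thread, just a sketch of a standard LVM setting): the volume_list option in the activation section of /etc/lvm/lvm.conf restricts which VGs/LVs a host may activate, so each node could list only its own LV. This does not activate anything by itself, and any local VG the node boots from (for example a local "pve" VG) must stay in the list, otherwise it will no longer activate at boot:

# /etc/lvm/lvm.conf on the node that uses QNAP-vxplfsrv19_lv
# (sketch; adjust the LV name per node, keep local VGs listed)
activation {
        volume_list = [ "pve", "QNAP-LVM-VG/QNAP-vxplfsrv19_lv" ]
}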
 
Thanks for your answer,

But I'm using LVM to store my CTs too!
When my physical node starts, I type "vzctl start <Some CT over LVM>", but the volume isn't mounted automatically.
 
CTs on LVM are not a supported setup, so we do not know how you did it and therefore cannot tell what's wrong.
 
Yes mir, you're right!

I can't really understand why CTs on LVM are not a supported setup (with ext3 on top of LVM, of course).
Is this because of technical issues? Or is the team focused on other features?

I am writing a small script to enable my LVs on the right node. If anyone is interested, I can paste it here.

Thanks,
Vince.
 
I can't really understand why CTs on LVM are not a supported setup (with ext3 on top of LVM, of course).

This is of course supported. You just need to mount the volume via fstab and enable autostart for the iSCSI volume.
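
For what it's worth, a rough sketch of that combination (the mount point below is a placeholder; the target and portal are taken from the storage.cfg above, and ext3 is assumed as mentioned in the thread):

# tell open-iscsi to log in to the target automatically at boot
iscsiadm --mode node --targetname iqn.2012-06.srv16:vsrv18 --portal 10.10.10.16 \
        --op update -n node.startup -v automatic

# /etc/fstab entry for the filesystem on the LV; _netdev delays the mount until the network is up
/dev/QNAP-LVM-VG/QNAP-vxplfsrv200_lv  /mnt/ct-storage  ext3  defaults,_netdev  0  2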
 
This is of course supported. You just need to mount the volume via fstab and enable autostart for the iSCSI volume.

This is what I did. iSCSI is OK, but my problem is that the LVM is NOT available after the node reboots.
So I edited fstab accordingly, but it can't mount my LV while it is NOT available.
 
Yes, my nodes automatically connect to iSCSI and see the LVs.

I have 4 LVs (for 4 nodes), each created from its own node, but every LV is marked "NOT available" each time.

Thanks
 
No I didn't, but this is not linked to the _netdev option: my LVM can be seen (iSCSI is already connected), yet the LVs come up with the status NOT available (I just have to type lvchange -a y [LV] on each node).

Here is my start script to make the LVM available:

#! /bin/sh
### BEGIN INIT INFO
# Provides:          activerLVM
# Required-Start:    $local_fs $all stop-bootlogd
# Required-Stop:
# Default-Start:     2
# Default-Stop:
# Short-Description: Activate LVM
# Description:
### END INIT INFO

# LV to activate and retry settings
LVM=/dev/QNAP-LVM-VG/QNAP-vxplfsrv200_lv
NAME=activerLVM
ESSAIS=3   # maximum number of attempts
TOUR=1     # current attempt

case "$1" in
start)
        # Retry activation until the LV block device appears, or give up after $ESSAIS attempts
        while [ ! -b $LVM ]; do
                echo "Activating LVM, please wait. $TOUR/$ESSAIS"
                lvchange -a y $LVM
                sleep 2
                if [ $TOUR -eq $ESSAIS -a ! -b $LVM ]; then
                        echo "Activation failed"
                        break
                fi
                TOUR=$((TOUR+1))
        done
        if [ -b $LVM ]; then
                echo "Activation successful! Mounting now"
                mount -a
        fi
        ;;
stop|restart|force-reload)
        # No-op
        ;;
status)
        # No-op
        ;;
*)
        echo "Usage: $NAME {start}" >&2
        exit 3
        ;;
esac
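
If anyone wants to reuse it: on this Debian/sysvinit generation the script would typically be saved under /etc/init.d/ and registered with update-rc.d (untested sketch, the script name is whatever you save it as):

chmod +x /etc/init.d/activerLVM
update-rc.d activerLVM defaults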
 
