Hi there,
Since I moved my storage to LVM on top of iSCSI, my LVM volume groups always come up with status "NOT available" when I reboot the physical nodes.
I have to run the following command on all 3 physical nodes to activate them again:

vgchange -a y
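As a temporary workaround I could imagine running something like this from /etc/rc.local (an untested sketch on my side; the 10-second delay is just a guess to let the iSCSI sessions come up), but I'd prefer a proper fix:

```shell
#!/bin/sh
# /etc/rc.local -- untested workaround sketch:
# wait for the iSCSI login to complete, then activate all volume groups.
# The delay value is an assumption; hooking into the open-iscsi init
# sequence would be cleaner than a fixed sleep.
sleep 10
vgchange -a y
exit 0
```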
The storage.cfg looks like this:

iscsi: ISCSI-srv18
        target iqn.2012-06.srv16:vsrv18
        portal 10.10.10.16
        content none
        nodes srv18

iscsi: ISCSI-srv19
        target iqn.2012-06.srv16:srv19
        portal 10.10.10.16
        content none
        nodes srv19

iscsi: ISCSI-srv17
        target iqn.2012-06.srv16:srv17
        portal 10.10.10.16
        content none
        nodes srv17

lvm: VG-srv18
        vgname VG-srv18
        content images
        nodes srv18

lvm: VG-srv17
        vgname VG-srv17
        content images
        nodes srv17

lvm: VG-srv19
        vgname VG-srv19
        content images
        nodes srv19
What can I do to get my LVM volume groups activated ("available") automatically at boot?
Output of pveversion -v:
pve-manager: 2.2-24 (pve-manager/2.2/7f9cfa4c)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-80
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-16-pve: 2.6.32-80
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-1
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-28
qemu-server: 2.0-62
pve-firmware: 1.0-21
libpve-common-perl: 1.0-36
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-34
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.2-7
ksm-control-daemon: 1.1-1
Thanks,
Vince.