clvmd not starting on boot

Hello,

I was having problems with DRBD failing to lock its backing store because LVM was locking it first. I finally found out I needed to use cLVM with clustered locking. I changed my existing VGs to clustered VGs with 'vgchange -c y' and everything is working great. Except when I reboot, clvmd does not start automatically. I have to run 'clvmd' by hand, otherwise vgscan returns errors saying it failed to connect (to clvmd).
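For reference, this is roughly the sequence involved ("myvg" is a placeholder for my real VG name, and the error text is paraphrased from memory):

```shell
# What I ran originally to mark an existing volume group as clustered:
vgchange -c y myvg

# After every reboot, scanning fails until the daemon is started by hand:
vgscan        # fails: cannot connect to clvmd
clvmd         # start the cluster LVM daemon manually
vgscan        # now succeeds
```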

Any ideas? Does the conversion of VGs to clustered VGs not work well?

root@ironworks:~# pveversion -v
pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46)
running kernel: 2.6.32-14-pve
proxmox-ve-2.6.32: 2.1-73
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-14-pve: 2.6.32-73
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-49
pve-firmware: 1.0-18
libpve-common-perl: 1.0-30
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-30
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-8
ksm-control-daemon: 1.1-1
--Will