OpenVZ Online Container Migration Fails

Rrpuglisi: what performance do you see with GFS2? Could you run some simple tests (of your choice)? I'm experimenting with GFS2 for use with a 2-node cluster and live migration. It's just two Proxmox 2.1 installs in VMware, and so far it seems to be working, but I can't really judge the performance of a live system. I'm not turning off fencing, though.
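Even a crude dd run on the GFS2 mount would be a useful data point (a sketch; /mnt/gfs2 is a placeholder for the actual mount point):

dd if=/dev/zero of=/mnt/gfs2/testfile bs=1M count=1024 oflag=direct   # sequential write, bypassing the page cache
dd if=/mnt/gfs2/testfile of=/dev/null bs=1M iflag=direct              # sequential read back
rm /mnt/gfs2/testfile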
 
Please help!

I want to set up GFS2 on my shared SAN.
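(For context, once the clustered LV exists, the goal is roughly the following; store1 is a placeholder filesystem name, XYZ is the cluster name from the cman_tool status output below, and -j 2 allocates one journal per node:)

mkfs.gfs2 -p lock_dlm -t XYZ:store1 -j 2 /dev/store1_vg/lvstore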

root@PVE-N2:~# pveversion -v
pve-manager: 2.1-1 (pve-manager/2.1/f9b0f63a)
running kernel: 2.6.32-11-pve
proxmox-ve-2.6.32: 2.0-66
pve-kernel-2.6.32-11-pve: 2.6.32-66
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-2
pve-cluster: 1.0-26
qemu-server: 2.0-39
pve-firmware: 1.0-15
libpve-common-perl: 1.0-27
libpve-access-control: 1.0-21
libpve-storage-perl: 2.0-18
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-9
ksm-control-daemon: 1.1-1

root@PVE-N2:~# cman_tool
cman_tool: no operation specified
root@PVE-N2:~# cman_tool status
Version: 6.2.0
Config Version: 3
Cluster Name: XYZ
Cluster Id: 549
Cluster Member: Yes
Cluster Generation: 164
Membership state: Cluster-Member
Nodes: 2
Expected votes: 3
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 6
Flags:
Ports Bound: 0 11
Node name: PVE-N2
Node ID: 2
Multicast addresses: 239.192.2.39
Node addresses: 192.168.44.6

Setup so far: clvmd is started, I ran fence_tool join, set FENCE_JOIN "yes" in /etc/default/redhat-cluster-pve, and ran lvmconf --enable-cluster.
/dev/mapper/mpath0 is allowed in the filter in lvm.conf.
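Spelled out, that preparation amounts to this on each node (a sketch of the steps above; the filter line is only an example, adjust it to your devices):

# /etc/default/redhat-cluster-pve:
FENCE_JOIN="yes"

fence_tool join
lvmconf --enable-cluster      # sets locking_type = 3 in /etc/lvm/lvm.conf
/etc/init.d/clvmd start

# /etc/lvm/lvm.conf, filter allowing the multipath device (example; keep local disks visible too):
filter = [ "a|/dev/mapper/mpath0|", "a|/dev/sd.*|", "r|.*|" ]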

First I run pvcreate on one node: pvcreate /dev/mapper/mpath0 succeeds.
After this I run vgcreate store1_vg /dev/mapper/mpath0, all fine.
But then, when I run lvcreate -l 100%FREE -n lvstore store1_vg, I get an error that VG store1_vg is not found.
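In commands, the whole sequence on the first node was:

pvcreate /dev/mapper/mpath0                  # OK
vgcreate store1_vg /dev/mapper/mpath0        # OK
lvcreate -l 100%FREE -n lvstore store1_vg    # fails: VG store1_vg not found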

root@PVE-N1:~# vgremove store1_vg also fails -> store1_vg not found

root@PVE-N1:~# pvremove /dev/mapper/mpath0 --> PV /dev/mapper/mpath0 belongs to Volume Group store1_vg so please use vgreduce first.

After this I shut down the first node and try to create the PV, VG, and LV on the second node.

root@PVE-N2:~# pvcreate /dev/mapper/mpath0
Can't initialize physical volume "/dev/mapper/mpath0" of volume group "store1_vg" without -ff

root@PVE-N2:~# pvcreate -ff /dev/mapper/mpath0
Really INITIALIZE physical volume "/dev/mapper/mpath0" of volume group "store1_vg" [y/n]? y
WARNING: Forcing physical volume creation on /dev/mapper/mpath0 of volume group "store1_vg"
Writing physical volume data to disk "/dev/mapper/mpath0"
Physical volume "/dev/mapper/mpath0" successfully created

OK!

Now I try to create a new VG:

root@PVE-N2:~# vgcreate -c y -v storevg /dev/mapper/mpath0
Adding physical volume '/dev/mapper/mpath0' to volume group 'storevg'
Archiving volume group "storevg" metadata (seqno 0).
Creating volume group backup "/etc/lvm/backup/storevg" (seqno 1).
Clustered volume group "storevg" successfully created

I think that's OK! But ...


root@PVE-N2:~# vgscan
Reading all physical volumes. This may take a while...

root@PVE-N2:~# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  pve    1   3   0 wz--n- 277.96g 16.00g

Where is my new VG storevg?? o_O

Trying to create the LV:
root@PVE-N2:~# lvcreate -l 100%FREE -n storelv storevg
Volume group "storevg" not found
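(A VG is only visible here if the locking layer agrees, so a clustered VG can "disappear" when clvmd/cluster locking isn't actually working on the node. A sketch of quick checks, assuming that is the problem:)

grep locking_type /etc/lvm/lvm.conf   # should be locking_type = 3 for clvmd
pidof clvmd                           # is the daemon actually running?
vgs -o vg_name,vg_attr                # a 'c' in the attr column marks a clustered VG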

root@PVE-N2:~# pvremove /dev/mapper/mpath0
PV /dev/mapper/mpath0 belongs to Volume Group storevg so please use vgreduce first.
(If you are certain you need pvremove, then confirm by using --force twice.)

root@PVE-N2:~# vgreduce -f storevg /dev/mapper/mpath0
Volume group "storevg" not found
cluster request failed: Invalid argument
Internal error: Attempt to unlock unlocked VG storevg.


root@PVE-N2:~# pvcreate /dev/mapper/mpath0
Can't initialize physical volume "/dev/mapper/mpath0" of volume group "storevg" without -ff
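(As the pvremove message above suggests, forcing twice removes the stale label; this is destructive, so a sketch only for the case where the metadata really is orphaned:)

pvremove -ff /dev/mapper/mpath0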



What am I doing wrong?? Please HELP!
 
