Proxmox 2.0 Beta HA Config validation error

argonius

Renowned Member
Jan 17, 2012
Hi,

I've upgraded to the newest beta and tried to test HA failover.
I created a simple KVM VM with ID 100. Its disk image resides on shared GlusterFS storage
and is accessible on node1 and node2.
After that I went to Datacenter -> HA and created an HA entry for this VM. Then I clicked commit
and received:

config validation failed: unknown error (500)

I don't know how to handle this error message :(
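
If the GUI only shows this generic 500 error, the underlying validation message is usually easier to find on the node itself. A minimal first check (just a sketch, assuming the PVE daemons log to syslog as on a default install) would be something like:

# grep -i 'cluster\|validation' /var/log/syslog | tail -n 20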

Below is some version information:

# pveversion -v
pve-manager: 2.0-18 (pve-manager/2.0/16283a5a)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 2.0-55
pve-kernel-2.6.32-6-pve: 2.6.32-55
lvm2: 2.02.88-2pve1
clvm: 2.02.88-2pve1
corosync-pve: 1.4.1-1
openais-pve: 1.1.4-1
libqb: 0.6.0-1
redhat-cluster-pve: 3.1.8-3
pve-cluster: 1.0-17
qemu-server: 2.0-13
pve-firmware: 1.0-14
libpve-common-perl: 1.0-11
libpve-access-control: 1.0-5
libpve-storage-perl: 2.0-9
vncterm: 1.0-2
vzctl: 3.0.29-3pve8
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-1
ksm-control-daemon: 1.1-1


Pending Changes in Proxmox:

- <cman keyfile="/var/lib/pve-cluster/corosync.authkey">
- </cman>
-
+<cluster config_version="3" name="Cluster1">
+ <cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <clusternodes>
- <clusternode name="proxmox1" votes="1" nodeid="1"/>
- <clusternode name="proxmox2" votes="1" nodeid="2"/>
  </clusternodes>
-
+ <clusternode name="proxmox1" nodeid="1" votes="1"/>
+ <clusternode name="proxmox2" nodeid="2" votes="1"/>
+ </clusternodes>
+ <rm>
+  <pvevm autostart="1" vmid="100"/>
+ </rm>
  </cluster>
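
For readability, the new cluster.conf that this commit would activate should look roughly like the following (reconstructed from the pending-changes diff above, so ordering and whitespace may differ slightly):

<cluster config_version="3" name="Cluster1">
 <cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
 <clusternodes>
  <clusternode name="proxmox1" nodeid="1" votes="1"/>
  <clusternode name="proxmox2" nodeid="2" votes="1"/>
 </clusternodes>
 <rm>
  <pvevm autostart="1" vmid="100"/>
 </rm>
</cluster>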
 
See http://pve.proxmox.com/wiki/High_Availability_Cluster

I doubt that Gluster is suitable for storing disk images and for HA; at least, I have personally never used it, and so far no one has reported this setup as fast and reliable against failures, while some have reported the opposite.

As a general rule, always use stable and well-tested storage technologies for HA, and Gluster is not on my personal list.
 
Hi,

Thanks for the fast reply. I cannot follow why GlusterFS should not be stable; we are using it in many production setups without problems :/
I figured out my problem: I was still missing the pve-resource-agents package. After installing it, everything seems to work fine now ;)
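
For anyone hitting the same error, the missing package can be installed on each node with the standard tools (a minimal sketch, assuming the stock Proxmox VE 2.0 beta repositories are configured):

# apt-get update
# apt-get install pve-resource-agents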

thx

argonius
 
Can you provide details about your Gluster setup? Any performance benchmarks for your virtual disks stored on Gluster? On Proxmox VE it's not a "stable and widely used" storage technology, nor on other virtualization platforms - but I would really like to see well-working systems with Gluster, though so far no one has shown one to me.

Great, thanks for the feedback - and as far as I can see, this is already documented in the wiki.
 
If I get my setup into a working state, I will provide you with data ;)
I am still struggling with rgmanager. I am also getting a segfault:

# ccs_tool lsnode -v

Cluster name: Cluster1, config_version: 3


Nodename         Votes  Nodeid  Fencetype
proxmox1             1       1
Segmentation fault

Looks like fencing is really needed, so I will first try to set up fencing with the SuperMicro IPMI and go from there ;)
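
For reference, IPMI-based fencing in cluster.conf normally uses the fence_ipmilan agent. A minimal sketch for one node could look like this (device name, IP address and credentials are placeholders, not values from this setup; remember to bump config_version when editing):

 <clusternode name="proxmox1" nodeid="1" votes="1">
  <fence>
   <method name="1">
    <device name="ipmi-proxmox1"/>
   </method>
  </fence>
 </clusternode>

 <fencedevices>
  <fencedevice agent="fence_ipmilan" name="ipmi-proxmox1" ipaddr="10.0.0.51" login="ADMIN" passwd="secret" lanplus="1"/>
 </fencedevices>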

greetz
argonius
 
argonius,

Do you have any benchmarks you can provide? I've been considering a few different distributed storage technologies, among them Gluster, Sheepdog and Ceph, and it would be interesting to know what your experience with Gluster has been so far.
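
In case it helps with the comparison: a very rough way to get baseline sequential numbers inside a guest whose disk lives on the shared storage is something like the following (a crude dd test only; tools such as fio or bonnie++ give more meaningful results):

# dd if=/dev/zero of=/root/ddtest bs=1M count=1024 conv=fdatasync
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/root/ddtest of=/dev/null bs=1M
# rm /root/ddtest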