I'm having the same issue, did you find any solution?
EDIT - it seems that this is a corosync limitation: you need to restart corosync on all nodes when adding a new node to a cluster that uses udpu.
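A minimal sketch of that restart step. The node names are placeholders, and the exact restart command is an assumption (on PVE 2.x/3.x the cluster stack may be managed via cman instead of a standalone corosync service); the loop below only prints the commands so you can review them first:

```shell
# Hypothetical node list; replace with your actual cluster hosts.
NODES="node1 node2 node3"

restart_cmds() {
  for n in $NODES; do
    # Print the command instead of running it; remove 'echo' to execute.
    echo ssh root@"$n" "'service corosync restart'"
  done
}

restart_cmds
```

Restart the nodes one at a time and confirm each one rejoins before moving on, so you don't take the whole cluster down at once.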
Great, thanks. I was stuck on 1.9 for a long time, but now we could finally upgrade to 3.0 and I'm really impressed by the progress you've made with it.
If anyone can answer what those cases are in which CLVM is needed, that would be great.
Proxmox 2.0 beta3 changelog states:
"do not activate clvmd by default (clvm is now only needed in very special cases)"
What are those special cases? I couldn't find any description on the wiki, in Bugzilla, or in this forum, only comments like "not needed for most users".
I'm using FC based...
I just installed Proxmox 3.0 and it seems that the memory graph uses 1000 instead of 1024 as its base. I have max memory set to 4G, but the line on the graph is above the 4G mark; same with currently used memory.
Is it just me?
ps. storage graphs also look like they're using 1000 instead of 1024
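The arithmetic behind the discrepancy: 4 GiB is about 4.29 GB, so a memory line drawn on a base-1000 scale lands visibly above a "4G" gridline. A quick check:

```shell
# 4 GiB expressed in both bases; shows why a 4 GiB value sits above the
# "4G" gridline when the graph divides by powers of 1000 instead of 1024.
bytes=$((4 * 1024 * 1024 * 1024))
gib=$((bytes / 1024 / 1024 / 1024))
gb=$(awk -v b="$bytes" 'BEGIN { printf "%.2f", b / 1000000000 }')
echo "4 GiB = $bytes bytes = $gib GiB = $gb GB"
```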
If You use shared storage of any kind, You can restart virtual machines from a failed host on another server; all You need to do is back up the /etc/qemu-server directory, which contains the virtual machine configs. There is no HA in Proxmox that will do it automatically.
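A minimal sketch of that backup step, assuming the /etc/qemu-server path mentioned above. The function and destination path are placeholders; point the destination at another host or at shared storage so the configs survive the failed node:

```shell
# Copy all VM config files from a source directory to a backup directory.
# Paths are hypothetical; on a real node the source is /etc/qemu-server.
backup_vm_configs() {
  src="$1"; dest="$2"
  mkdir -p "$dest" && cp -a "$src"/. "$dest"/
}

# On a real node, something like:
# backup_vm_configs /etc/qemu-server /mnt/shared/qemu-server-backup
```

To recover, copy the failed node's .conf files into the same directory on a surviving node and start the VMs there.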
Desktop drives don't support some features that are needed when a disk is used in a hardware RAID, see:
http://www.spinics.net/lists/xfs/msg03730.html
http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
Proxmox is using an 8-node cluster of HP DL180G5 servers, each with 1TB 7200RPM SATA drives (no RAID volumes; each disk is a single volume).
MFS splits files (VM images in this case) into 64MB chunks that are spread across the MFS cluster; each chunk is replicated according to the goal that was set for the given file...
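Back-of-envelope math for what that chunking means in raw storage, with hypothetical numbers (a 10GB image, goal=2): the image is cut into 64MB chunks and each chunk is stored "goal" times across the cluster:

```shell
# How much raw cluster storage a VM image consumes in MFS.
# image_mb and goal are hypothetical example values.
image_mb=10240   # 10GB VM image
chunk_mb=64      # MFS chunk size
goal=2           # copies kept of each chunk
chunks=$(( (image_mb + chunk_mb - 1) / chunk_mb ))   # round up
raw_mb=$(( chunks * chunk_mb * goal ))
echo "$chunks chunks, ~${raw_mb}MB of raw cluster storage at goal=$goal"
```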
Metadata is not the only thing that affects performance, and I had a bad experience with GlusterFS (some time ago, around ~v2.0; it may be different now). No dedicated central server may seem like a good thing, but it also creates problems:
- split-brain may occur and it may lead to data loss, it...
Why do You think that "no use of metadata" gives You the best performance?
I use http://www.moosefs.org/ and I can recommend it; it's easy to set up and quite fast (I'm getting 50-60MB/s for reads in a VM).
You got something for free from the internet, it broke, and instead of fixing it you are complaining that it broke in the first place. Don't have the skills needed to fix it? Pay the Proxmox guys for support; that's even easier than "going back to Citrix".
By default, KVM exposes only a subset of CPU features, those supported by most CPU models that support virtualization, so that after You live-migrate a VM from one host to another it still works on the new host's CPU. If You have the same CPU on every host, You can enable all the features or only those...
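One way to expose everything when all hosts are identical is to pass the host CPU model through. A hypothetical fragment of a VM config (the `<vmid>.conf` file in the qemu-server directory; the `cpu` option name is an assumption here, check your PVE version's documentation):

```
# Pass the host CPU model through so the guest sees all host CPU flags.
# Only safe when every node in the cluster has the same CPU model;
# live migration to a different model may break the guest.
cpu: host
```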
I've resized one of the logical volumes using lvextend; You can't do that in the case of LVM on shared storage connected to Proxmox.
see http://forum.proxmox.com/threads/2744-ProxMox-on-a-direct-attached-shared-storage-system?p=15240#post15240
I did everything using Proxmox tools and still I had problems. Before that I had one cluster with LVM on top of FC, but I broke it with lvextend, so I was very careful not to do any manual "tweaking" and left everything to Proxmox.
I've had an identical problem on two clusters in two separate data centers. One was using FC storage and multipath; the other used iSCSI, first without multipath, and then I added multipath to protect myself from iSCSI reconnects (one reconnect changed the iSCSI disk name from sda to sdb and my LVM stopped...
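The sdX renaming problem can be sidestepped by addressing the LUN through a stable multipath alias instead of the kernel device name. A hypothetical /etc/multipath.conf fragment (the WWID below is made up; find yours with `multipath -ll`):

```
# Bind a stable alias to the LUN's WWID so LVM always sees
# /dev/mapper/pve-lun0, regardless of sda/sdb renames after reconnects.
multipaths {
    multipath {
        wwid  3600a0b80000f1234000012ab56cd78ef
        alias pve-lun0
    }
}
```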