Have you upgraded to 3.3 and qemu 2.1.2 yet? If so, are you encountering the same problem I describe here?
I'm going to do a test build to see if I can pinpoint what is causing the issue, since I did not change my version of gluster. The two suspects are the kernel and qemu.
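To narrow it down I'll just diff the package versions between a node that still works and the one that doesn't; something like this on each node should show whether the kernel or qemu moved (exact package names may differ on your setup):

    pveversion -v        # lists pve-kernel, pve-qemu-kvm, pve-manager and friends
    kvm --version        # the qemu binary Proxmox actually runs
    glusterfs --version  # confirm gluster itself really is unchanged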
A while back I worked with people on this forum and we eventually got a version of qemu 2.1.0 compiled against glusterfs version 3.5.2. You can see the history here:
http://forum.proxmox.com/threads/19102-Updated-gluster-possible
So until recently I was operating on the pve test 2.1.0 version of...
Thanks for your reply, Loki. I'm going to assume that "migration_unsecure: 1" is the qemu parameter you mean. The purpose of the hpn-ssh patch is to make encrypted SSH traffic fast. I suspect a lot of Proxmox users don't run their cluster over a dedicated network, as the default install pretty much uses your...
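For anyone else reading along, if I understand it right that setting goes in /etc/pve/datacenter.cfg, roughly like this (so migration traffic bypasses the SSH tunnel entirely):

    # /etc/pve/datacenter.cfg
    migration_unsecure: 1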
Given that Proxmox performs live/storage migration over SSH, I think this would be pretty significant.
http://www.psc.edu/index.php/hpn-ssh
Any chance of a deb file for a build using this patch? Our cluster is linked over 10GbE, so getting 50-100 MBytes/sec for something like live migration was...
Agreed, everything seems OK. My best guess is that qm isn't waiting long enough for a response, and the VM then comes up anyway. I currently have 30 VMs running on a 3-way replica under 3.5.2 with the recompiled pve-qemu-kvm and everything is good. I can finally run "gluster volume heal volname info" and...
I have two prod machines running on glusterfs 3.5.2 right now and everything appears to be OK. However, when I perform a storage migration I get an error like this:
[2014-09-09 18:16:19.844075] E [afr-common.c:4168:afr_notify] 0-gvms1-replicate-0: All subvolumes are down. Going offline until...
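For context, this is roughly what I check on the node while the migration is failing (gvms1 is the volume name from the log above):

    gluster peer status             # every peer should show State: Connected
    gluster volume status gvms1     # bricks and their ports should be online
    gluster volume heal gvms1 info  # anything left pending heal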
How did you get just pve-qemu-kvm installed? The deb requires libiscsi4; did you do an apt-get upgrade after adding pvetest?
(NM, I figured it out; it was a silly mistake. I just needed to run apt-get -f to fix it.)
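In case it trips anyone else up, the sequence was roughly this (the exact deb filename depends on which pvetest build you grabbed):

    dpkg -i pve-qemu-kvm_*.deb   # fails with an unmet dependency on libiscsi4
    apt-get -f install           # pulls in the missing dependency and finishes configuring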
Thanks, Tom, this is great to hear. I continued working on 3.4.2-1 (with the annoying bad heal info bug), but I am extremely eager to get on 3.5.2 if qm start works properly now. I can't test at work easily because the Proxmox 3.2 kernel does not support nested virtualization, so I'll take a crack...
I'm not sure specifically where the problem is, but the symptoms are that QM reports a timeout even though the machine starts properly. This problem prevents live migration as well.
This post had something very similar (not the same error though)...
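To reproduce it outside the GUI I just start a guest from the CLI and check it straight after (vmid 100 is only an example):

    qm start 100    # the command comes back with a timeout error
    qm status 100   # ...yet the guest reports status: running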
So here's my setup today:
Node 1 (running glusterfs 3.4.2 and qemu 1.7.1 stock)
Node 2 (same as above)
Node 3 (running glusterfs 3.5.2 from the gluster repo but qemu 1.7.1 stock)
My best guess is that qemu's libgfapi connection points to the server (in my case I use localhost), because qemu on Nodes 1 & 2 can...
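If it helps, you can exercise the same libgfapi path a VM would use without booting a guest at all, assuming your qemu-img was built with gluster support (the volume name and image path here are just mine, adjust to yours):

    qemu-img info gluster://localhost/gvms1/images/100/vm-100-disk-1.qcow2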
Thanks. As a virtual storage host, we are very eager to move to 3.5.x because volume heal info just doesn't work in 3.4.x. The problem isn't that you may have compiled glusterfs-server (it's good to know you used stock debs), but that I suspect you must have compiled qemu (and it will need to be...
I stand corrected; I should have said "last time I used..." etc.
I'm glad napp-it support is being moved to Linux. It looks like certain functions aren't and won't be ported to Linux: http://www.napp-it.org/downloads/linux_en.htm
"Linux is not my preferred or main platform. Many napp-it...
My recommendation would be:
1) ZoL on Proxmox native or exported
2) Gluster for live migration (or just use Storage migration; see the storage.cfg sketch below the list)
3) If you use a SAN setup, use 10GbE or fiber
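Roughly what 1) and 2) look like in /etc/pve/storage.cfg on our boxes; names and paths are only examples, and we expose the ZFS dataset as a plain directory store since that is what worked for us:

    dir: zfs-vmdata
            path /tank/vmdata
            content images,backup

    glusterfs: gluster-vms
            server localhost
            volume gvms1
            content images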
We run ZFS and have been in production for 2 years (since before KVM, when we were on VMware). Proxmox has an installation...
First off, let me explain my setup briefly.
We are running a 3-node cluster (HA turned off presently). All three nodes use a Gluster replicate volume for VM storage; the reason I chose Proxmox is that (at the time of choosing) it was the only UI that supported libgfapi fully. Everything works...
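For reference, the replicate volume underneath was created with something along these lines (hostnames and brick paths are only illustrative):

    gluster volume create gvms1 replica 3 \
        node1:/export/brick1/gvms1 node2:/export/brick1/gvms1 node3:/export/brick1/gvms1
    gluster volume start gvms1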