Updated gluster possible?



Is any fixed .deb available?

Did you test with the latest pvetest packages from today? Update and test again - it works in my tests.
 
No, I use the no-subscription repo.
The pvetest repo offers quite a lot of updates. Should I install all of them or only a few of them, and if only a few, which ones?

The following NEW packages will be installed:
ipset libgoogle-perftools4 libipset2 libiscsi4 libmime-base32-perl libmnl0 libnetfilter-log1 liboath0 libtcmalloc-minimal4 libunwind7 novnc-pve oathtool pve-firewall pve-kernel-2.6.32-32-pve python-suds
The following packages will be upgraded:
ceph-common corosync-pve fence-agents-pve libcorosync4-pve libpve-access-control libpve-common-perl libpve-storage-perl librados2 librbd1 proxmox-ve-2.6.32 pve-cluster pve-manager pve-qemu-kvm
python-ceph qemu-server vncterm vzctl
17 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.
 
Figured it out - only pve-qemu-kvm needs updating.
Can confirm - works like a charm. Thank you.
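
In case it helps anyone, this is roughly what I did (just a sketch - the repo line is the standard wheezy pvetest one, adjust it if you use a mirror):

echo "deb http://download.proxmox.com/debian wheezy pvetest" > /etc/apt/sources.list.d/pvetest.list
apt-get update
# only the qemu package, not a full upgrade
apt-get install pve-qemu-kvm

apt pulls in the new dependencies (libiscsi4 etc.) on its own.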
 
Thanks Tom, this is great to hear. I have kept running 3.4.2-1 (with the annoying bad heal info bug), but I am extremely eager to get onto 3.5.2 if qm start works properly now. I can't test at work easily because the Proxmox 3.2 kernel does not support nested virtualization, so I'll take a crack at it at home tonight.

Do I need to use the full pvetest repo, or can I just install the required pieces - and if so, which are they? I would assume:

http://download.proxmox.com/debian/...nary-amd64/glusterfs-client_3.5.2-1_amd64.deb
http://download.proxmox.com/debian/...nary-amd64/glusterfs-server_3.5.2-1_amd64.deb
http://download.proxmox.com/debian/dists/wheezy/pvetest/binary-amd64/pve-qemu-kvm_2.1-5_amd64.deb
http://download.proxmox.com/debian/dists/wheezy/pvetest/binary-amd64/qemu-server_3.1-33_amd64.deb
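
If the full repo isn't needed, I guess something along these lines would do for the two PVE packages (just a sketch; the glusterfs .deb URLs above are truncated here, so those would be fetched from wherever they actually live):

wget http://download.proxmox.com/debian/dists/wheezy/pvetest/binary-amd64/pve-qemu-kvm_2.1-5_amd64.deb
wget http://download.proxmox.com/debian/dists/wheezy/pvetest/binary-amd64/qemu-server_3.1-33_amd64.deb
dpkg -i pve-qemu-kvm_2.1-5_amd64.deb qemu-server_3.1-33_amd64.deb
# if dpkg complains about missing dependencies such as libiscsi4:
apt-get -f install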

Thanks!
 
Hey,

I've installed only the updated pve-qemu-kvm. Works like a charm. Deploying glusterfs 3.5.2 right now :)
 
From the gluster repos. And they did fix the healing daemon in 3.5.2 (for some reason it was not packaged starting with 3.5.1, so I reported that). I also reported that they had a problem with the libc version which made it impossible to install 3.5.2 on wheezy; that is fixed now. So it is now safe (as far as I'm concerned) to migrate to 3.5.2. The only thing I've noticed is some strange behaviour with logging :) After the upgrade, log rotation misbehaves: logs are rotated, but glusterfsd and glusterfs keep writing to faillogname.log.1 instead of the new faillogname.log :(. Reported and waiting for a fix. The logrotate.d config files seem to be ok.
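
As a stopgap I'm thinking about adding copytruncate to the existing stanza in /etc/logrotate.d (the exact file name shipped by the package may differ, so treat this purely as a sketch), so that the daemons keep writing to the same file and logrotate copies and truncates it instead of renaming it:

/var/log/glusterfs/*.log {
    weekly
    rotate 52
    missingok
    compress
    copytruncate
}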
 
How did you get just pve-qemu-kvm installed? The deb requires libiscsi4 - did you do an apt-get upgrade after adding pvetest?

(Never mind, I figured it out - it was a silly mistake; I just needed to run apt-get -f to fix the dependencies.)
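
For the record, this is roughly all it took once pvetest was in sources.list (the full invocation being apt-get -f install):

apt-get update
apt-get -f install   # pulls in libiscsi4 and completes the pve-qemu-kvm install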
 
I have two prod machines running on glusterfs 3.5.2 right now and everything appears to be ok. However, when I perform a storage migration I get errors like this:

[2014-09-09 18:16:19.844075] E [afr-common.c:4168:afr_notify] 0-gvms1-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2014-09-09 18:16:20.000730] E [afr-common.c:4168:afr_notify] 0-gvms1-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2014-09-09 18:16:20.549181] E [afr-common.c:4168:afr_notify] 0-gvms1-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.

The migration then proceeds to work immediately afterwards, and I have no issues with bringing servers down/up (healing works fine). Has anyone else seen this, or maybe even knows what it means?
 
I have just tried to move a VM drive from NFS to glusterfs 3.5.2-1 and I can see exactly the same messages. But unfortunately after this nothing happens and the VM freezes. Before canceling the job I could see an image with the correct size on my gluster storage, but no data was actually being transferred. Could the problem be that I run both glusterfs and nfs-server at the same time? When adding the storage in the GUI it offered me the wrong directory, so I had to type it manually. I can see in the logs: [xlator.c:403:xlator_init] 0-nfs-server: Initialization of volume 'nfs-server' failed, review your volfile again
I was hoping I could do a slow migration from NFS to glusterfs. Is there any workaround?
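
If the online move stays broken, one manual workaround I'm considering (purely a sketch - storage IDs, VMID and paths below are made up; PVE normally mounts storages under /mnt/pve/<storage-id>) is to do it offline:

qm shutdown 100
mkdir -p /mnt/pve/gluster1/images/100
qemu-img convert -p -O qcow2 /mnt/pve/nfs1/images/100/vm-100-disk-1.qcow2 /mnt/pve/gluster1/images/100/vm-100-disk-1.qcow2
# then point the disk line in /etc/pve/qemu-server/100.conf at the new storage and start the VM again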
 
Agreed, everything seems ok. My best guess is that qm isn't waiting long enough for a response, and it then works afterwards. I currently have 30 VMs running on a 3-way replica under 3.5.2 with the recompiled pve-qemu-kvm and everything is good. I can finally ask "gluster volume heal volname info" and get an _honest_ answer.
 

Have you upgraded to 3.3 and qemu 2.1.2 yet? If so, are you encountering the same problem as I describe here?

I'm going to do a test build to see if I can pinpoint what is causing the issue, since I did not change my version of gluster. The two suspects are the kernel and qemu.
 
