OK,
as tests showed, the problem only occurs when I set performance.write-behind: on. There are no problems with read-ahead on. The Gluster devs said they will try to find out where exactly the mistake is and fix it, if possible. They need to track the IO pattern during the installation process of D8...
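In case it helps them trace the IO pattern, GlusterFS has a built-in per-volume profiler that can be driven from the CLI; just a rough sketch, with pve-vol as a placeholder volume name:

gluster volume profile pve-vol start
# run the D8 installation inside the VM, then dump the per-operation statistics:
gluster volume profile pve-vol info
gluster volume profile pve-vol stop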
not yet. I'm on vacation atm. After I'm done with it, I've got some tests to do that were requested by the glusterfs devs. We've got to figure out which of these two options doesn't work right, and then they can see whether they can do something in the translators, or whether it's virtio or even a Debian 8 bug (as i mentioned...
Hi,
Got this problem: I'm migrating some of my VMs to another node (not in a cluster) using the dump/restore method. But if I dump a VM which has multiple disks on different storages, I end up with a single file (ok with that). When I try to restore, I can choose only one destination storage.
What...
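For reference, this is roughly the workflow I mean (the VMID, paths and the target storage name are just examples):

vzdump 101 --dumpdir /mnt/backup --mode stop --compress lzo
# copy the archive to the target node, then restore it there;
# qmrestore takes a single --storage, so every disk lands on that one storage:
qmrestore /var/lib/vz/dump/vzdump-qemu-101-<timestamp>.vma.lzo 101 --storage local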
Guys, it's an epic win.
Yesterday I got more advice from the GlusterFS devs; the problem is solved after I added:
performance.write-behind: off
performance.read-ahead: off
to the volume config.
I'll post the recommended config for volumes and qemu as soon as I've tested them enough, so someone could...
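In the meantime, if someone wants to apply the same workaround from the CLI right away, this is roughly it (replace VOLNAME with your volume name; the changed settings should then show up under "Options Reconfigured" in gluster volume info):

gluster volume set VOLNAME performance.write-behind off
gluster volume set VOLNAME performance.read-ahead off
gluster volume info VOLNAME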
Just to sum things up:
D8 fails to install on glusterfs storage that is used by Proxmox qemu for VM storage.
Other affected distros: only D8 (D7, Ubuntu 14.04 LTS, CentOS 7 and CentOS 6 were tested and work).
D8 works if glusterfs is mounted via NFS, but this way libgfapi is not used!! (see the storage.cfg sketch below)
D8 works on...
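For clarity, on the Proxmox side the two setups differ roughly like this in /etc/pve/storage.cfg (storage IDs, IP and volume name are made up). The glusterfs type lets qemu reach the volume through libgfapi, while an nfs type goes through the gluster NFS server and the kernel NFS client instead:

glusterfs: gluster-vm
        server 10.0.0.1
        volume vmdata
        content images

nfs: gluster-nfs
        server 10.0.0.1
        export /vmdata
        path /mnt/pve/gluster-nfs
        content images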
Meanwhile, is there any chance to get updated kvm-qemu and qemu-server with the current release? At least from the pvetest repository? I've got one node I can restart pretty often, so I can check it there (most importantly, that node should still boot with the new changes :) )
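For anyone else who wants to try packages from there: as far as I understand it's just an extra apt source, something along these lines (the suite has to match your PVE base release, so this exact line is only illustrative):

deb http://download.proxmox.com/debian jessie pvetest

followed by apt-get update and installing the updated pve-qemu-kvm and qemu-server packages.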
toni.patroni
thanks for calming me down...
tom, dietmar, community, I'm sorry about the things I wrote in #8.
Those were just emotions and the result of the red-eyes effect from not sleeping much the last 2 months while trying to figure out, together with the gluster devs, why only the D8 installation fails, and one of them is from the US...
Yes, I lose :) Technically, no. But I can't restart servers that often, so I'd better wipe out the partitions with a new FS. All I'm trying to say is that for me, this time, such a huge change was not deadly, but it could be for someone else, e.g. if the root partition was on JFS. I do understand that fully tested...
As this was our first node, we decided to use JFS there for local storage (backups and ISOs) and logs, as it is a really good FS that uses very little CPU.
XFS is not a good option, as there are a lot of problems with it when you use GlusterFS. The most popular one is time-outs due to locks or...
Hi,
This thread is a result of my previous thread here http://forum.proxmox.com/threads/22142-debian-jessie-kvm-installation.
I created a new thread to get some attention, as there was not much activity on the previous one (devs' attention, I mean, of course :) )
So, I've been debugging this...
thanx.
these two seem weird to me:
performance.write-behind: off
performance.write-behind-window-size: 4MB
if it's off, why specify its window size?
But well, it doesn't matter anyway. It doesn't seem like any of these options could really affect anything.
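One way to see what is actually in effect (on reasonably recent gluster releases) is to query the options directly, defaults included; just a sketch, VOLNAME is a placeholder:

gluster volume get VOLNAME performance.write-behind
gluster volume get VOLNAME performance.write-behind-window-size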
cool. I had my backups, logs and templates on a jfs mount.
=( Sad to lose data with a simple apt-get dist-upgrade. I can't boot with an older kernel, it's a semi-production server.
How is your glusterfs configured? Did you add the storage via the Proxmox GUI or from the console? I do not use caching for any VM and they run well.
But thanks for sharing your experience. Could you describe your setup please?
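In case it's easier, the output of these two on the node would basically describe the setup I'm asking about (the VMID is just an example):

cat /etc/pve/storage.cfg   # how the gluster storage is defined
qm config 101              # the disk lines, including any cache= setting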
It seems it is Proxmox's or Debian's fault!
http://www.gluster.org/pipermail/gluster-users/2015-July/022804.html
Other VEs (e.g. oVirt) run D8 without problems on glusterfs.