Here is one example of the errors. It looks like the files that the Debian installer copies to the virtual disk located on GlusterFS storage are getting corrupted.
in-target is /dev/vda1
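One way to verify that files really get damaged on the way to the volume is to compare checksums before and after copying. A minimal sketch, assuming the GlusterFS volume is mounted at /mnt/pve/glustervol (path is just an example):

dd if=/dev/urandom of=/tmp/testfile bs=1M count=512    # create a 512 MB random test file
md5sum /tmp/testfile
cp /tmp/testfile /mnt/pve/glustervol/testfile
md5sum /mnt/pve/glustervol/testfile                    # should match the first sum

If the two sums differ, something in the write path to Gluster is corrupting data.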
No, it is not an I210 issue.
I'm still not able to install Jessie as a VM on Proxmox. I upgraded GlusterFS to 3.6.4, and it's still the same problem on every Proxmox node; it doesn't really matter whether it is an HA or DISTR volume. Today an installation of Ubuntu 14.04 LTS with MATE went just fine. So it seems like a Debian 8...
Okay,
It seems that only nodes with the I210 are affected. But why only with D8? And there is a lot of stuff out there about this NIC and the igb driver. Going to investigate it further.
Any comments?
Not to start a discussion here, but there is nothing like Gluster or Ceph that is fully production ready. These technologies are young. So at this point I would say that Gluster is mature enough if you know exactly what you need.
Maybe Ceph has some advantages, but it is definitely much slower. So just have...
Some guys from the GlusterFS community and I think it is related to the D8 networking drivers. Timing or something else has changed. Maybe some tuning will become available later, maybe some fixes.
Ah, no.
This thing is just so random. It installed without problems just a few seconds ago on the same storage I had problems installing on before...
Have to wait for 8.1.
Seems like I found the reason.
On the working Proxmox node I've got GlusterFS client v3.5.2; on the non-working one, v3.5.3.
As soon as I updated 3.5.2 to 3.5.3, I got the same problem.
I'll report it to the GlusterFS devs.
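For anyone comparing their nodes, this is roughly how I check and freeze the client version (package names may differ on your distro):

dpkg -l | grep glusterfs                            # show installed GlusterFS packages and versions
apt-mark hold glusterfs-client glusterfs-common     # keep apt from pulling 3.5.3 back in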
I'm not reporting anything :) This is a community forum, isn't it? I'm just asking for advice or experience from community members and maybe from the Proxmox devs (your experience is priceless here).
So just for the record, I'm not arguing, I'm just sharing my experience and would like to know if someone...
Same from me: since everything else (including Debian 7, CentOS, Ubuntu) works well on my setup, it cannot be my setup. I haven't been running this setup for just a day; I've got a lot of VMs there. It's just some Debian drivers bug.
Give GlusterFS a try with Debian 8 raw and qcow files.
Here is the error when using...
vzquota drop $VMID
helps.
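For the record, the sequence looks roughly like this (VMID is the container's numeric ID; if I understand vzquota correctly, the quota gets recalculated on the next start):

vzquota drop $VMID    # throw away the stale quota file
vzctl start $VMID     # quota is rebuilt when the CT starts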
Result:
The Debian template from the Proxmox GUI is broken.
Summary:
If you want to move a CT's storage somewhere else (Gluster, another dir, another disk, or similar), do not use the CT templates from the Proxmox GUI. Download them from the OpenVZ site; they work fine without any error.
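A rough sketch of what I mean (the template name is just an example, pick the one you need):

cd /var/lib/vz/template/cache
wget http://download.openvz.org/template/precreated/debian-7.0-x86_64-minimal.tar.gz
vzctl create $VMID --ostemplate debian-7.0-x86_64-minimal    # create the CT from the downloaded template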
Summary of this test, if someone got lost while reading:
Using raw file format as the disk type on GlusterFS storage:
1. Debian 7 installs well and fast
2. Debian 8 installation takes ages (6 hours from the netinstall image)
Using qcow file format as the disk type on GlusterFS storage:
1. Debian 7 installs well...
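For reference, this is roughly how the two disk files can be created by hand for such a comparison (paths and sizes are examples; Proxmox normally creates them for you from the GUI):

qemu-img create -f raw   /mnt/pve/glustervol/images/100/vm-100-disk-1.raw   32G
qemu-img create -f qcow2 /mnt/pve/glustervol/images/100/vm-100-disk-1.qcow2 32G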
Well, I don't actually believe that there is a problem with my GlusterFS setup. It seems like a problem with the Debian 8 virtio drivers. Now I've chosen the qcow format; the start was pretty good, but as soon as I reach the network mirror choice step, it is not possible to continue. It does not matter what...
Really, it goes OK with local storage.
But what could the problem be? I'm running the default stable GlusterFS and have no problems with raw on the Debian 7 installation, but it is slow with Debian 8...
Any hints for GlusterFS settings?
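For example, I'm thinking of experimenting with the caching tunables, something like this (the volume name is a placeholder, and I can't confirm these actually help with this particular problem):

gluster volume set myvol performance.write-behind off
gluster volume set myvol performance.flush-behind off
gluster volume info myvol    # verify the options took effect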
Small update:
After installation I made a template of the installed Jessie and then made a clone, using the qcow file format. Now it runs smoothly.
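Roughly the steps I took, if someone wants to reproduce (the IDs are examples; check the qm man page for your PVE version):

qm template 100                           # convert the installed Jessie VM into a template
qm clone 100 101 --full --format qcow2    # full clone with a qcow2 disk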
Any problems with raw file storage?
I can confirm that after a REALLY slow installation the VM runs OK. If I download some files over the Internet, I'm getting about 1 Gbps, so there is no problem between Proxmox and Gluster. But as soon as I install something from the repos, it goes very slowly. It downloads pretty fast, but...
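To separate network speed from the storage write path, one can test raw write throughput inside the VM with O_DIRECT, bypassing the page cache; a quick sketch (the file path is arbitrary):

dd if=/dev/zero of=/root/ddtest bs=1M count=1024 oflag=direct
rm /root/ddtest

If downloads are fast but this is slow, the bottleneck is in writes to the Gluster-backed disk, not the network.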
Hi,
Has anyone tried to install a Debian 8 VM on Proxmox? I'm trying to install using a GlusterFS volume as storage and have no luck:
1. If I use the qcow format, then the installation never finishes, or (if it installs fine) I get errors on boot (like "can't load module ext4") and am not able to...
Hm.
I've downloaded a template from the OpenVZ site and it is working, but df -h still shows the wrong sizes:
Filesystem Size Used Avail Use% Mounted on
/dev/simfs 4.0G 128K 4.0G 1% /
none 128M 4.0K 128M 1% /dev
none 26M 1.0M 25M 4% /run
none...
And I also get these strange things if I run the CT from a different storage location...:
root@test:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/simfs 4.0G 128K 4.0G 1% /
root@test:/# ifup eth0
run-parts: failed to open directory /etc/network/if-pre-up.d...