OK, so I thought it was buggy DRBD9, but after 3 days of lost time I finally made it :)
Before the fix I had about 20 Mbit/s syncing; after it I get about 800 Mbit/s on a 1 Gbit link :)
NET | bond0 80% | pcki 13298 | pcko 133286 | si 3879 Kbps | so 807 Mbps | coll 0 | mlti 12 | erri...
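For anyone else who lands here: the resync-speed knobs in DRBD 9 live in the disk section of the resource config. This is only a sketch - the resource name and values are illustrative, not my exact config:

# /etc/drbd.d/r0.res - illustrative name and values
resource r0 {
    disk {
        c-plan-ahead  20;    # enable the dynamic resync-rate controller
        c-fill-target 1M;    # resync data to keep in flight to the peer
        c-max-rate    100M;  # ceiling for resync traffic
        c-min-rate    10M;   # floor so resync is not starved by app I/O
    }
}

Reload with drbdadm adjust r0 after editing.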
Hi, I have a strange issue.
All interfaces on my 2 nodes are 1 Gbit NICs, connected at 1 Gbit in an active-backup bond.
I am using the newest DRBD9 in Proxmox 4.2.
Syncing runs at about 20 Mbit/s (it should be more like 800 Mbit/s).
root@node2:~# drbdadm status
.drbdctrl role:Primary
volume:0...
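In case someone asks, the link itself can be checked roughly like this (the interface name and iperf address are illustrative, adjust to your setup):

cat /proc/net/bonding/bond0      # active slave and negotiated speed
ethtool eth0 | grep -i speed     # confirm the NIC runs at 1000Mb/s
iperf -s                         # on node1
iperf -c 10.0.0.1                # on node2: raw TCP throughput of the link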
It would be nice if Proxmox dumps were improved and fixed.
A common problem is that while a dump is being made, the VM hangs internally with the 120-second timeout (even on SSD). That was on Proxmox 3.2 - maybe 4.2 fixes it, I didn't check.
Thanks for the answer.
I don't really understand why Ceph is slow when there is one disk on each node (and that disk is an SSD). DRBD has no problems with that setup, as I tested.
Can I make
/dev/sdb1
/dev/sdb2
/dev/sdb3
as "more OSD" ? :)
And what about DRBD 8 - is it more stable than 9?
About Gluster I...
In my experience I can say this:
By using Proxmox you save a whole bunch of money (like $20,000) by not buying an external shared storage appliance (like a SAN or NAS).
In Proxmox you can do shared storage using free software (Ceph or DRBD).
Also you save some money by not buying "central management"...
Hi
I have 3 nodes running Proxmox 4.2.
On every node I have the whole of /dev/sdb free to use as shared SSD storage.
Now, tell me this:
I would like this shared storage to be stable because it will be in production :)
I want HA on the cluster, so I need shared storage - please suggest what is best...
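For the Ceph route, what I have in mind is roughly this (the cluster network is illustrative):

pveceph install                        # on every node
pveceph init --network 10.10.10.0/24   # once, on the first node
pveceph createmon                      # on every node, for 3 monitors
pveceph createosd /dev/sdb             # one OSD per node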
So should I use DRBD instead on those 4 nodes to get shared storage, or do something else?
Each node has 2x250 GB SSDs in hardware RAID, seen by the system as one disk.
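If DRBD9 is the answer, the drbdmanage setup would be roughly this, as far as I understand it (node names and IPs are illustrative):

drbdmanage init 10.0.0.1             # on the first node
drbdmanage add-node node2 10.0.0.2   # join each remaining node
drbdmanage add-node node3 10.0.0.3
drbdmanage add-node node4 10.0.0.4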
Hi.
My Proxmox version is the newest, 4.2.
Is it possible to create Ceph with three/four nodes on a local-lvm volume?
The problem is that I don't have a second physical disk like /dev/sdb available (because of a stupid RAID controller which can't create two separate logical disks).
So the question is: is it possible to...
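What I am picturing is carving a logical volume out of the existing pve volume group and feeding it to Ceph as a directory-backed OSD - only a sketch, the size and names are made up, and I am not sure ceph-disk is happy with this:

lvcreate -L 200G -n cephosd0 pve            # carve space from the pve VG
mkfs.xfs /dev/pve/cephosd0
mkdir -p /var/lib/ceph/osd-local
mount /dev/pve/cephosd0 /var/lib/ceph/osd-local
ceph-disk prepare /var/lib/ceph/osd-local   # directory as the OSD data path
ceph-disk activate /var/lib/ceph/osd-local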
Hi,
Unfortunately that's not the case.
- it never happens when ide0 is selected (it only happens with virtio0)
- mostly there is no load at all in that situation, but during the last hang I saw tremendous load; I think it was caused by the "not accessible" disk. All 3 virtual machines got...
Unfortunately... after 12 days one of my VMs just hung.
It happened shortly (within 2-3 hours) after a backup in snapshot mode.
Anyway, I had to switch to IDE for now; this is in production, so I can't let this happen.
Any thoughts on what is causing this and how to fix it for sure?
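The only mitigation I have seen suggested for these writeback stalls is lowering the guest's dirty-page thresholds so flushes start earlier and in smaller bursts - a sketch, the values are common suggestions, not something I have verified:

# /etc/sysctl.d/dirty.conf (inside the guest)
vm.dirty_background_ratio = 5   # start background writeback earlier
vm.dirty_ratio = 10             # block writers sooner, in smaller bursts

Load it with sysctl --system.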
From the wiki :) - so maybe the stable kernel in Debian 7 is causing this, because it ships 3.2 while backports has 3.16 :)
VirtIO
Use virtIO for disk and network for best performance.
Linux has the drivers built in since Linux 2.6.24 as experimental, and since Linux 3.8 as stable
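So the test would be to pull the 3.16 kernel from wheezy-backports into the guest - a sketch (the mirror URL may differ):

echo "deb http://http.debian.net/debian wheezy-backports main" >> /etc/apt/sources.list
apt-get update
apt-get -t wheezy-backports install linux-image-amd64
reboot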
Hi guys.
I know there are some topics about this bug, but mine still isn't solved.
I still have an unsolved problem on some of my servers.
I have 3 host servers and many VMs on them.
Sometimes (at a totally random time, on a random VM) a Debian 7 Linux guest hangs...
"task xxx blocked for more than 120...