Try a cache tier: http://technik.blogs.nde.ag/2017/07/14/ceph-caching-for-image-pools/
And look up bcache; I haven't tested it yet, but forum posts on here say it's really nice!
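For reference, the cache-tier setup from that blog post boils down to roughly the commands below; the pool names 'images' and 'images-cache' and the size limit are just placeholders from my notes, not recommendations:

# attach a fast SSD pool as a writeback cache in front of the slow pool
ceph osd tier add images images-cache
ceph osd tier cache-mode images-cache writeback
ceph osd tier set-overlay images images-cache
# the cache tier needs hit-set tracking and a size limit before it flushes/evicts
ceph osd pool set images-cache hit_set_type bloom
ceph osd pool set images-cache target_max_bytes 107374182400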
Massive post! It should be sent to the ceph-users mailing list. Also, I'm really aching to enable bcache to see those write improvements!
PS. Mind telling us how you disabled the 4 MB feature? DS.
It didn't take too long for us to realise that bcache comes from a time when SSDs were fast at...
I have had the same issue since the beginning, and the only thing that "helped" was setting up a cache tier as per http://technik.blogs.nde.ag/2017/07/14/ceph-caching-for-image-pools/
I tried upgrading the network to 10 Gbit, upping the DB/WAL sizes for the OSDs, writeback cache, more BlueStore cache, etc.
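To be concrete, the BlueStore cache tweak I mean is just a ceph.conf setting along these lines; the values are examples of what I tried, not recommendations:

[osd]
# per-OSD BlueStore cache size in bytes, for HDD- and SSD-backed OSDs
bluestore_cache_size_hdd = 2147483648
bluestore_cache_size_ssd = 4294967296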
Hi,
Every so often, some of the machines I try to shut down just hang. I am then forced to kill the process from the shell.
There are no logs as far as I can see either, so I'm not sure how to troubleshoot this.
Anyone else experiencing this?
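For the record, killing it from the shell looks roughly like this on my nodes (VMID 100 is just an example):

qm stop 100                                   # try a forced stop first
kill -9 $(cat /var/run/qemu-server/100.pid)   # last resort: kill the KVM process directly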
Hi,
I managed to get my HP Z400 to pass through my Nvidia 750 Ti (Server 2016, BIOS not UEFI) by following this guide: https://github.com/sk1080/nvidia-kvm-patcher
You basically put the server into "test mode", then patch an old driver with the tool above, and it works.
Oh and putting in another card...
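For anyone following along: "test mode" here is the Windows test-signing boot option, which from an elevated command prompt in the guest is roughly this (reboot afterwards):

bcdedit /set testsigning on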
This did it for me as well, with the latest updates installed etc.
Thanks!
EDIT:
Was forced to change the HPET setting as well:
if ($winversion >= 6) {
    # use the 'discard' lost-tick policy for the KVM PIT on Windows guests
    push @$globalFlags, 'kvm-pit.lost_tick_policy=discard';
    # '-no-hpet' (which would remove the guest HPET device) is left commented out
    # push @$cmd, '-no-hpet';
}
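If you patch this directly in the Proxmox Perl code like I did, the change only takes effect once the daemon is restarted (and it will be overwritten by package updates); roughly:

systemctl restart pvedaemon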
Thanks for the heads up!
I will add some warnings to the script and update it on the forum here.
It would be good if the destroy button actually destroyed the disk from the GUI.
Is it possible for me to edit the commands behind the GUI button in Proxmox?
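In the meantime, what I do after the GUI destroy is wipe the old OSD disk by hand; roughly this, assuming the OSD lived on /dev/sdd:

ceph-volume lvm zap /dev/sdd --destroy
# on older ceph-disk based OSDs, something like: ceph-disk zap /dev/sdd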
Hi,
I've had issues when I put in new journal disks and wanted to move existing OSDs from the old journal disk to the new ones.
The issue was: I set the OSD to out, then stopped the OSD and destroyed it.
Recreating the OSD with the new DB device made the OSD never show up!
This is a...
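For reference, this is roughly the sequence I used on a current Proxmox version (OSD 5, /dev/sdf and /dev/nvme0n1 are just placeholders for my IDs and devices):

ceph osd out 5
systemctl stop ceph-osd@5
pveceph osd destroy 5 --cleanup
pveceph osd create /dev/sdf --db_dev /dev/nvme0n1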
I mostly get the same abysmal Ceph speed as well on 12 OSDs with a 10 Gbit backend. I installed Kingston V300s for the journals, and then it maxed out my 1 Gbit network, at least for the first GB, before dropping back to 1-40 MB/s again.
Will install a third node soon, with a good amount of OSDs...
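If anyone wants to compare numbers, a quick way to get comparable figures is a rados bench run; roughly this, where the pool name 'bench' is just an example and --no-cleanup keeps the objects around for the read test:

rados bench -p bench 60 write --no-cleanup
rados bench -p bench 60 seq
rados -p bench cleanup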