Since KVM's disk I/O is effectively single-threaded per virtual disk while Ceph itself is multi-threaded, I tried adding 4 new HDDs (VirtIO block) and built a RAID 0 across them in Windows, which gave me much better speeds.
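For anyone wanting to try the same trick, here is a rough sketch of adding the extra VirtIO disks from the CLI (the VM ID 100, storage name "ceph-vm" and 200GB size are just examples, adjust to your setup); iothread=1 gives each disk its own I/O thread in QEMU:
# Add four VirtIO block disks backed by the Ceph storage, then stripe them inside Windows
qm set 100 --virtio1 ceph-vm:200,iothread=1
qm set 100 --virtio2 ceph-vm:200,iothread=1
qm set 100 --virtio3 ceph-vm:200,iothread=1
qm set 100 --virtio4 ceph-vm:200,iothread=1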
Next step is Ceph cache tiering + SSD/NVMe journaling, or if you are really brave, bcache.
An SSD cache pool did wonders for me, but I also have journals on separate SSDs. I followed the guide here: http://technik.blogs.nde.ag/2017/07/14/ceph-caching-for-image-pools/
You should also consider upgrading the network to 10GbE. And all disks should be in the pool, since more disks = faster...
Get a journal SSD and size it at roughly 1GB of journal per TB of slow drive, so about 8GB partitions for your 8TB drives. Also take a look at a cache layer, this did WONDERS for me.
Here is the guide I followed, pretty simple:
http://technik.blogs.nde.ag/2017/07/14/ceph-caching-for-image-pools/
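In case it helps, a rough sketch of carving those ~8GB journal partitions out of a dedicated SSD (the /dev/sdf device name and the four-OSD layout are just assumptions for the example); each OSD then gets pointed at its own partition when it is created:
# One ~8GB journal partition per 8TB OSD on the journal SSD
sgdisk -n 1:0:+8G -c 1:"journal-osd0" /dev/sdf
sgdisk -n 2:0:+8G -c 2:"journal-osd1" /dev/sdf
sgdisk -n 3:0:+8G -c 3:"journal-osd2" /dev/sdf
sgdisk -n 4:0:+8G -c 4:"journal-osd3" /dev/sdf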
I forgot to paste the public network as well, sorry for that, original post has been edited!
After that, I set the cluster network IPs (172.16.0.11, then 12, etc. in my case) in the Proxmox GUI, rebooted, and it's done :)
This is what I have in my /etc/ceph/ceph.conf:
cluster network = 172.16.1.0/24
public network = 192.168.1.0/24
[mon.pve]
host = pve
mon addr = 192.168.1.12:6789
[mon.pve11]
host = pve11
mon addr = 192.168.1.11:6789
[mon.pve3]
host = pve3...
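To double-check that the daemons actually picked up both networks after editing ceph.conf, you can ask them over the admin socket (mon.pve and osd.0 are just the names from my setup; run this on the node hosting the daemon):
ceph daemon mon.pve config show | grep -E 'cluster_network|public_network'
ceph daemon osd.0 config show | grep -E 'cluster_network|public_network'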
Am posting this here in case anybody searches for this in the future.
https://github.com/fulgerul/ceph_proxmox_scripts
#
# Install Ceph MDS on Proxmox 5.2
#
## On MDS Node 1 (name=pve11 / ip 192.168.1.11)
mkdir /var/lib/ceph/mds/ceph-pve11
chown -R ceph:ceph /var/lib/ceph/mds/ceph-pve11
ceph...
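Not part of the snippet above, but once the MDS is up you still need data/metadata pools and the filesystem itself before anything can mount CephFS; a minimal sketch (pool names and PG counts are only examples):
# Create the CephFS pools and the filesystem, then check the MDS went active
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 32
ceph fs new cephfs cephfs_metadata cephfs_data
ceph mds stat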
Hi,
Adding the cluster network and reloading Ceph should suffice.
Here are the commands to do a live reload:
systemctl stop ceph\*.service ceph\*.target
systemctl start ceph.target
Specifying host is only needed for mons, unless you wanna be super picky about it and define hosts for the OSDs as well...
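If you do want to be that picky, the per-OSD sections look just like the mon ones, e.g. (OSD number and host picked arbitrarily for the example):
[osd.0]
host = pve11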
Hi,
Thanks for the answer! I will try tinc and OVPN as well, just wanted to check if anyone is running multi-site VPN + Proxmox that might give me some gotchas! :)
I am expanding now and wonder if anyone has tested site-to-site? There is a guide that uses tinc, but I keep reading about bad speeds.
So I am wondering if anyone practices this? I have a WireGuard VPN that maxes out my WANs, but I cannot for the life of me get multicast to work for now...
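For the multicast part, the usual sanity check is running omping on all nodes at the same time over the VPN (node names are placeholders for your hosts):
# ~100% multicast responses here means corosync should be happy with the link
omping -c 10000 -i 0.001 -F -q node1 node2 node3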
It gets old fast to remove everything from HA and then re-add all machines + names in the details. Any way we could get names and not just IDs inside this view?
TL;DR: Forgot to set the Protected setting!
Can we get this by default, or give us the option to set it by default on all VMs?
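Until then, a small loop over the CLI gets close to "on by default" (the VM IDs are just examples):
# The protection flag disables the remove VM and remove disk operations
for id in 101 102 103; do qm set $id --protection 1; done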
Will KVM do multi-threaded RBDs in the future? I see some posts about it...
So in my quest for faster speeds inside the VMs with Ceph as the underlying storage, I got a...
So I just wanted to share this issue that I've been having with my cluster. I had loads of RRD cache issues, so I had to reset a whole bunch of services, but got it working. Then the HA cluster stopped working.
After a quick systemctl status pve-ha-crm.service
I saw this...
pve...
So I had disabled cephx and then enabled it again, but still got the error (maybe pvestatd should check whether cephx has been enabled again).
My solution for the "pvestatd: rados_connect failed - Operation not supported" error was therefore the following:
cd /etc/pve/priv/ceph/old/
mv * ../
Now my proxmox GUI...
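If the GUI still shows question marks after moving the keyrings back, restarting the stats daemon makes it reconnect right away:
systemctl restart pvestatd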