How can I move Ceph to another subnet while VMs are running?

jjc27017

As the topic says, I am running a Proxmox cluster and use Ceph as storage. At first we were just testing, so we put both Ceph and Proxmox on the same subnet. But it seems the shared network hurts Ceph's performance: it only reaches about 10 MB/s write speed, and VMs or LXCs stall for minutes at a time (for example, running vim on a file and saving with :wq hangs for a few minutes before returning to the terminal).

So we want to move Ceph to the second network interface to see if that solves the problem. Once the new network is configured and working, is it enough to just edit /etc/ceph/ceph.conf for Ceph to run on the other subnet by itself?
 
Every Ceph service has to be restarted after changing the subnets.

10 MB/s is very slow, even if you have all traffic on a single 1 Gbit/s interface (not recommended). What does your setup look like (hardware/config)?
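
For reference, the relevant settings live in the [global] section of /etc/pve/ceph.conf. A minimal sketch, assuming the new Ceph subnet is 10.10.10.0/24 (adjust to your network; the monitor addresses in the [mon.X] sections have to be updated as well):

Code:
[global]
    # client and monitor traffic
    public network = 10.10.10.0/24
    # OSD replication/heartbeat traffic; can be the same or a separate subnet
    cluster network = 10.10.10.0/24

After editing, restart the monitors and OSDs on every node, e.g. with systemctl restart ceph-mon.target ceph-osd.target.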
 
Yes. I tested on another server with SAS disks and it can reach 50 MB/s write speed. Right now I am using SATA disks for the Ceph storage (for cost reasons we do not use SSDs).

But I think the network is the bigger issue, because the subnet is used by lots of servers.
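
(To check whether the shared network or the disks are the limit, I plan to benchmark the pool directly with rados bench, bypassing the VM layer; a sketch, assuming a dedicated test pool named testpool:)

Code:
# write 4 MB objects for 60 seconds; keep them for the read test
rados bench -p testpool 60 write --no-cleanup
# sequential read of the objects written above
rados bench -p testpool 60 seq
# remove the benchmark objects afterwards
rados -p testpool cleanup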
 
Hi Alwin,

May I confirm the procedure: stop all VMs and LXCs, stop the OSDs and MONs, edit the public network, cluster network, and MON IP addresses in /etc/pve/ceph.conf, then restart the services, and the change will take effect?
 
You can edit the config beforehand and then restart the services. There will be an interruption of service; if the VMs/CTs are offline, even better.

Last but not least, make backups before changing. Just to be safe. ;)
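
As a rough sketch of that sequence (not a complete runbook; if the monitor IP addresses change, the monitors may additionally need to be recreated or have an edited monmap injected):

Code:
# on each node, after shutting down all VMs/CTs
systemctl stop ceph-osd.target ceph-mon.target
# edit /etc/pve/ceph.conf: public network, cluster network, mon addresses
# bring up the new interface/subnet, then start Ceph again
systemctl start ceph-mon.target ceph-osd.target
# wait for HEALTH_OK before starting VMs/CTs again
ceph -s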
 
Sorry for the delayed reply. I have switched Ceph to the second network interface, but it shows the same issue.
I would like to know: for the Ceph OSDs in Proxmox, what is a typical latency (in ms) per OSD?

[Screenshot: per-OSD latency values from the Proxmox GUI]
The latency seems too long? What would be a good value, something like 3 or 4 ms per OSD?
 
So you are using filestore as the backend, since the values for bluestore are usually smaller. How long it takes for an object to be written to an OSD depends greatly on your setup. You need to run tests and tune Ceph if it does not meet your expectations.
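
The values from the GUI can also be read on the command line; ceph osd perf lists the commit and apply latency per OSD in milliseconds:

Code:
# per-OSD commit/apply latency in ms, same values the GUI shows
ceph osd perf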
 
I mostly get the same abysmal Ceph speed on 12 OSDs with a 10 Gbit backend. I installed Kingston V300 SSDs for the journals; that maxed out my 1 Gbit network, at least for the first GB, and then it dropped back down to 1-40 MB/s.

I will install a third node soon with a good number of OSDs and see if that helps Ceph performance; otherwise I will go with GlusterFS or something, because in my opinion Ceph on Proxmox doesn't work as it should.
In the GUI you can't even define the journal size, even if you configure it in /etc/pve/ceph.conf.
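
(For reference, this is the setting I mean; a sketch assuming a 10 GB journal, which only takes effect for OSDs created afterwards:)

Code:
[osd]
    # filestore journal size in MB; only applied when an OSD is created
    osd journal size = 10240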

Looking at the Ceph mailing list shows that most users are in big datacentres, so even mentioning ANY consumer-grade disks/journal SSDs will get you laughed at.
 
Check out our Ceph benchmark paper and the thread with its hardware comparisons:
https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2018-02.41761/

It doesn't matter whether the setup is big or small; you get what you pay for. Test those SSDs with the fio tests provided in the benchmark paper and you will be able to compare them against other people's results.
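
A sketch of the kind of single-threaded 4k sync write test the paper uses; /dev/sdX is a placeholder, and the test writes directly to the device, so only run it on a disk without data:

Code:
# WARNING: destructive, writes to the raw device
fio --ioengine=libaio --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based --name=ssd-test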
 
