How to reconfigure the Ceph network on Proxmox VE?

Greetings!
We are going to deploy a Proxmox VE cluster together with Ceph. At the moment we have 10 Gbit network cards for the Ceph network, but in the future we would like to replace them with faster ones (100 Gbit).
Will we be able to do this without problems for Ceph (so that it does not degrade and we do not lose data)?
Does Proxmox VE provide a standard way to change the network interfaces used by Ceph?
Can someone give links to instructions?
Unfortunately, I could not find any manuals on this topic.
 
Hi,

Will we be able to do this without problems for Ceph (so that it does not degrade and we do not lose data)?
Yes, changing the network interfaces should work without any issues. We recommend doing it on a test PVE cluster first, to avoid misconfiguration, before you apply the change to the production cluster.

Does Proxmox VE provide a standard way to change the network interfaces used by Ceph?
Can someone give links to instructions?
To change the network interface, make sure that `ifupdown2` is already installed; in PVE 7.x it is installed by default. Then replace the old NIC name with the new NIC name from the PVE UI or by editing the /etc/network/interfaces file.
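For example, a minimal sketch of the change in /etc/network/interfaces (the NIC names `enp3s0` and `enp65s0` and the address are only placeholders for your old and new cards):

```
# before: Ceph network on the old 10 Gbit NIC
auto enp3s0
iface enp3s0 inet static
        address 10.10.10.1/24

# after: the same address, now on the new 100 Gbit NIC
auto enp65s0
iface enp65s0 inet static
        address 10.10.10.1/24
```

With `ifupdown2` installed you can then apply the change with `ifreload -a` instead of rebooting.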
 
Thank you for your reply!
Did I understand you correctly that it will simply be necessary to assign the same cluster-network IP to the new network card in /etc/network/interfaces?
But what if the new network cards on the nodes are connected to another switch? We want to switch to a 100 Gbit optical network.
 
But what if the new network cards on the nodes are connected to another switch? We want to switch to a 100 Gbit optical network.
In this case, after you configure the new NIC, you have to set the new IPs for the 100 Gbit network in the Ceph config file, `/etc/pve/ceph.conf`: replace the subnets in `cluster_network` and `public_network`. After you edit the IPs, you have to restart the OSDs. Regarding the monitors, you destroy one and create a new one on the first node, then on the second node, and so on...
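A rough sketch of what that could look like (the subnet `10.20.20.0/24` is just an example for the new 100 Gbit network):

```
# /etc/pve/ceph.conf (excerpt, example subnet)
[global]
        cluster_network = 10.20.20.0/24
        public_network = 10.20.20.0/24
```

Then, per node, something like:

```
systemctl restart ceph-osd@0.service   # repeat for each OSD ID on the node
pveceph mon destroy <monid>            # remove the old monitor...
pveceph mon create                     # ...and recreate it on the new network
```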
 
Yes, I have almost the same question.
On my 3-node cluster, the Ceph network currently goes through a switch, but we'd like to add more NICs and build a full-mesh network to exclude the switch as a single point of failure.

Is there a way to stop all Ceph services completely, reconfigure the network on the nodes, and start them again?
 
In this case, after you configure the new NIC, you have to set the new IPs for the 100 Gbit network in the Ceph config file, `/etc/pve/ceph.conf`: replace the subnets in `cluster_network` and `public_network`. After you edit the IPs, you have to restart the OSDs. Regarding the monitors, you destroy one and create a new one on the first node, then on the second node, and so on...
Nothing will happen to the information stored on the OSDs?

And another question: what if we write DNS names instead of IPs in /etc/pve/ceph.conf, and in /etc/hosts we specify two IP addresses for each DNS name in advance: the first from the existing network cards and the second from the future ones? Then, when we install the 100 Gbit cards, we would remove the IPs from the 10 Gbit network cards.
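For illustration, something like this in /etc/hosts (hypothetical names and addresses):

```
# /etc/hosts: two addresses per node name
10.10.10.1     pve-node1   # current 10 Gbit NIC
172.16.22.1    pve-node1   # future 100 Gbit NIC
```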
 
Sorry for the late answer

Nothing will happen to the information stored on the OSDs?
It should be fine, but as said, I would do this on a PVE test cluster first, to learn the steps and see what issues might occur by mistake.


And another question: what if we write DNS names instead of IPs in /etc/pve/ceph.conf, and in /etc/hosts we specify two IP addresses for each DNS name in advance: the first from the existing network cards and the second from the future ones? Then, when we install the 100 Gbit cards, we would remove the IPs from the 10 Gbit network cards.
I haven't tested that before, so I'm sorry, I can't say whether it will work as expected. However, as with the original and well-known configuration, I would recommend using IPs instead of DNS names in ceph.conf. Maybe someone here who has tested that will reply and share their experience.
 
Hello everybody!
I have set up a test cluster.
eth0 - access to the VM network + Proxmox cluster.
eth1 - public and ceph cluster networks.
eth2 is empty.
I want to migrate the ceph cluster network to eth2.
I changed the configuration to cluster_network = 172.16.22.2/24 (the eth2 network) on all nodes and rebooted all the servers, but nothing has changed. How do I check that Ceph is now doing OSD replication via eth2?
 
You can run `ss -tunap | grep ceph-osd` and confirm that there are connections on the new cluster subnet.
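For example, filtering for the new subnet from your post:

```
# show ceph-osd sockets on the new cluster subnet only
ss -tunap | grep ceph-osd | grep '172.16.22.'
```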

Note that the subnets in `cluster_network` act as a filter for accepting incoming cluster connections. So if you want to change the networks non-disruptively, you need to ensure that there is routing between the old and the new subnets, specify both the old and the new subnets in the `cluster_network` parameter, and then restart each OSD. After all the OSDs have restarted and are listening on the new subnet, you can remove the old subnet. You may also want to use the `cluster_addr` parameter for each OSD to ensure that the OSD uses only the new network (that's what I did when I migrated to new subnets in my lab).
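For reference, a minimal sketch of the transitional config (the old subnet `10.0.0.0/24` is a placeholder; the new one is the example from above):

```
# /etc/pve/ceph.conf (excerpt) -- during the migration
[global]
        cluster_network = 10.0.0.0/24, 172.16.22.0/24   # old and new subnets

# optionally pin a single OSD to the new network
[osd.0]
        cluster_addr = 172.16.22.2
```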
 
Great!
I will try.
Thank you very much!
 
...would someone direct me to a step-by-step guide on how to move the existing Ceph network, which is currently shared with the VM/Proxmox host network, onto its own (additional) network cards?

Thanks!
 
Can you share your /etc/network/interfaces, /etc/pve/corosync.conf, and /etc/pve/ceph.conf?
 
