Replace SFP+ NIC in a Ceph cluster

silvered.dragon

Renowned Member
Nov 4, 2015
I need to replace a 10Gb SFP+ 2-port NIC with a similar NIC that provides 4 ports instead of 2. This particular NIC serves the inter-node Ceph network in a full-mesh configuration, so there are no switches inside the ring. It's a production 3-node cluster running Ceph on the latest Proxmox; the pool replica size is 3 and min_size is 2. What is the best way to do this operation without stopping production? Many thanks
 
First, what is an 'internodal' network?

Are you using both ports on the existing NIC?

If there is room to keep the existing NIC and add the new one, then consider just adding the new one. (Note: on our systems we found that Linux starts naming with the NIC on the right [when looking at the back of the motherboard]. If a new NIC were added to the right, the first NIC would get a different device name than before.)

After that you could configure one port at a time on the new NIC and subtract ports from the old one.
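One way to sidestep the renaming problem entirely is to pin the interface name to the card's MAC address with a systemd .link file, so a NIC keeps its name no matter what slot order the kernel enumerates. A minimal sketch, assuming a Debian-based Proxmox host (the MAC address and the name `ceph0` are placeholders):

```shell
# Hypothetical example: pin an interface name to its MAC address so it
# survives adding or removing other NICs. Replace the MAC with the real
# one from `ip -br link`.
cat > /etc/systemd/network/10-ceph0.link <<'EOF'
[Match]
MACAddress=aa:bb:cc:dd:ee:01

[Link]
Name=ceph0
EOF

# Rebuild the initramfs so the rename is applied at early boot, then reboot.
update-initramfs -u
```

This is a sketch of the general systemd-networkd mechanism, not a Proxmox-specific procedure; test it on a non-critical interface first.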
 
Sounds like my home lab - 3 Proxmox nodes with Ceph, each connected to each other via a 2-port 10Gb NIC in a broadcast mesh network for Ceph.

If that's the case, you should be able to just move any VMs and containers to another node, down the host, and perform your maintenance. Proxmox and Ceph should be able to sort things out as the node comes back online.
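Before downing a node it also helps to tell Ceph not to start rebalancing while the host is offline, since with size 3 / min_size 2 the cluster can serve I/O with one node down but you don't want a full data shuffle during a short maintenance window. A rough sequence, assuming the standard Ceph CLI on any node:

```shell
# Prevent Ceph from marking OSDs "out" and rebalancing during maintenance
ceph osd set noout

# ...migrate VMs/containers away, shut the node down, swap the NIC, boot it...

# Once the node's OSDs are back up and peered, re-enable normal behaviour
ceph osd unset noout

# Verify the cluster has returned to HEALTH_OK
ceph -s
```

The `noout` flag is the usual Ceph maintenance-mode idiom; how long you can safely leave it set depends on how long the node is down.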
 
First, what is an 'internodal' network?

Are you using both ports on the existing NIC?

If there is room to keep the existing NIC and add the new one, then consider just adding the new one. (Note: on our systems we found that Linux starts naming with the NIC on the right [when looking at the back of the motherboard]. If a new NIC were added to the right, the first NIC would get a different device name than before.)

After that you could configure one port at a time on the new NIC and subtract ports from the old one.
I'm sorry, I'm writing from Italy. We usually use this term for a meshed network ("interna ai nodi", i.e. internal to the nodes); in this particular case it's the ring network built across the three nodes. So I have two "internodal" networks: one is for Ceph and uses both ports of the existing 10Gb NICs (for a 3-node ring you need 2 ports per node), and another is for the Proxmox cluster, provided by a 4-port Gigabit Ethernet NIC.

The big problem is that I don't have enough PCI slots available, so I need to remove the 2-port NIC and install the new 4-port NIC in the same slot.
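After the swap, the new card's ports will almost certainly come up with different names, so the mesh configuration in /etc/network/interfaces has to be updated to match before Ceph traffic flows again. A minimal sketch of one node's mesh leg, assuming the broadcast-bond variant of a Proxmox full-mesh setup (interface names and the 10.15.15.0/24 subnet are placeholders, not taken from this cluster):

```shell
# Hypothetical /etc/network/interfaces fragment for one node's Ceph mesh.
# The old 2-port NIC was e.g. enp3s0f0/enp3s0f1; after the swap check the
# new names with: ip -br link
auto bond0
iface bond0 inet static
        address 10.15.15.50/24
        bond-slaves enp3s0f0 enp3s0f1   # the two mesh ports on the new card
        bond-mode broadcast
        bond-miimon 100
```

Because only the slave interface names change, the Ceph network address stays the same and the cluster should re-peer on its own once the links are up.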
 
Sounds like my home lab - 3 Proxmox nodes with Ceph, each connected to each other via a 2-port 10Gb NIC in a broadcast mesh network for Ceph.

If that's the case, you should be able to just move any VMs and containers to another node, down the host and perform your maintenance. Proxmox and Ceph should be able to sort things out as the node comes online.
I thought this too, but it's a production environment and I'm a little scared of moving things blindly.
 
I think the only thing I can do is create a VirtualBox environment similar to the production one, then try changing the NICs and see what happens.
 
