Create virtual 10G SFP+ switch for Ceph?

RMDrinan

New Member
Mar 19, 2022
I read the full mesh article for Ceph: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
I did get three nodes pinging each other over DAC, but I have 4 PVE nodes that I would like to use for Ceph, all with 10G SFP+ DAC connections.
With the SDN functionality in 8.1, would it be possible to put a couple of cheap dual-port 10G cards (HP 530SFP+ are $15-20 each) in the most powerful workstation and turn them into a virtual SFP+ switch, so the other 3 nodes could each connect to it with a single DAC cable and the virtual switch would send traffic out the correct interface for the intended node? Using Open vSwitch, etc. would be fine - I just don't know if the idea is even feasible.
If there are any cheap 10G SFP+ switches for a home lab - i.e. quiet, small, and used, in the ~$100 range - then I would use that, but I haven't seen any. I have seen that there are old enterprise switches with many 10G SFP+ ports, but I imagine they are all loud and overkill for me.
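To make the idea concrete, I'm imagining something roughly like the sketch below on the "switch" node - a plain Linux bridge enslaving the SFP+ ports would probably be the simplest way to prototype it before touching SDN. The interface names and the address are placeholders, not my actual NICs:

# Sketch only: a Linux bridge on the "switch" node bundling the 10G ports.
# enp1s0f0 / enp1s0f1 / enp2s0f0 are placeholder names - check `ip link`
# for the real names of the ports on the 530SFP+ cards.
auto enp1s0f0
iface enp1s0f0 inet manual

auto enp1s0f1
iface enp1s0f1 inet manual

auto enp2s0f0
iface enp2s0f0 inet manual

auto vmbr1
iface vmbr1 inet static
        address 10.15.15.55
        netmask 255.255.255.0
        bridge-ports enp1s0f0 enp1s0f1 enp2s0f0
        bridge-stp off
        bridge-fd 0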
 
Ceph thrives on everything being redundant. Rather than a home-made switch, you should use the mesh setup, or one (better, two) decent switches. There are now some consumer devices that often cost more than enterprise devices, but they are quieter and use less power.
 
Thanks for the reply @sb-jw,
This is a home lab though, so I am not storing any critical data. It will host my VMs, which I also back up via Proxmox Backup Server - using separate storage, of course. I want to learn the networking in Proxmox better, but I also don't want to put Ceph on a setup like I described if it will be nothing but headaches. I am really just trying to understand whether the setup could work and the best way to implement it. The intent of my home lab is to experiment and learn; however, I do want the "platform" (Proxmox and storage) to be relatively stable, since I learn other things in the VMs.
I plan to test this setup and assign an IP to the bridge and to each NIC of PVE2/3/4 to see if everything communicates. For configuring Ceph, I would basically follow the full mesh article (routed simple scenario), and PVE2/3/4 would simply have an IP address per NIC plus an IP for the bridge. Then, as an example, when the PVE2 Ceph daemon wants to talk to PVE4, for the config section below I would use the full /24 for the route, so the virtual switch would see an inbound request for PVE4's MAC and send it out the interface for PVE4. I'm not very familiar with DAC configs, but I assume that won't really make a difference in this scenario.
I found cheap DAC cables (that I confirmed work), so I could theoretically put two cheap dual-port cards in PVE2 and basically have a cross-connected switch setup with two virtual switches - and it would all cost less than buying a physical unmanaged 10G SFP+ switch, which would also be a single point of failure. I could then experiment with VLANs on top of those virtual switches, although I wouldn't touch the Ceph subnet - I wouldn't even make it a VLAN, to keep things simple. I included a very simple diagram of the plan (before possibly going to two virtual switches).

iface ens18 inet static
        address 10.15.15.51
        netmask 255.255.255.0
        up ip route add 10.15.15.50/24 dev ens18
        down ip route del 10.15.15.50/24
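Thinking about it a bit more: since PVE2/3/4 would all hang off the same bridge, they would share one broadcast domain, so each leaf node might only need a plain /24 on its 10G port rather than the per-host routes from the article. A minimal sketch of what a leaf (say PVE3) could look like - ens19 is just a placeholder name for the 10G port, not my actual NIC:

# Sketch of a leaf node (e.g. PVE3) when all nodes connect to the bridge.
# The /24 on the interface already installs the connected route for
# 10.15.15.0/24, so the explicit "up ip route add" lines may be unnecessary.
auto ens19
iface ens19 inet static
        address 10.15.15.53
        netmask 255.255.255.0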
 

Attachments

  • Ceph Virtual Private Switch.png (22.6 KB)
I have this partially working now using the OVS switch / RSTP loop scenario. PVE4 has a dual-port NIC with both ports assigned to the OVS switch. PVE2 and PVE3 each have a single-port 10G SFP+ NIC with DAC cables connected into the OVS switch, and PVE2, PVE3, and PVE4 can ping 10.15.15.52, 10.15.15.53, and 10.15.15.55 (the IP of the OVS switch). I have a couple more cheap dual-port 530SFP+ NICs coming from eBay to add PVE1 into the mix.
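For reference, the bridge on PVE4 is roughly along the lines of the wiki's RSTP loop example. The sketch below uses placeholder interface names (ens19/ens20), so treat it as an illustration rather than my exact config:

# Rough sketch of the OVS bridge on PVE4 (placeholder NIC names ens19/ens20).
auto ens19
iface ens19 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr1

auto ens20
iface ens20 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr1

auto vmbr1
iface vmbr1 inet static
        address 10.15.15.55
        netmask 255.255.255.0
        ovs_type OVSBridge
        ovs_ports ens19 ens20
        # enable RSTP on the bridge so a future cross-connect loop won't storm
        up ovs-vsctl set Bridge ${IFACE} rstp_enable=true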
 