What should I do with my extra 2.5Gb NICs?

jptechnical

Mar 17, 2023
I have a 3-node cluster I use for testing (just upgraded to version 8), and I finished my evaluation of Ceph but didn't find it suitable for what I intended. My lab network is all 1Gb, and I don't push much across it. But I have this dedicated 2.5Gb Ethernet backbone I used for Ceph, and I am wondering if I can use it for cluster traffic and ZFS syncing.

Is there a benefit to dedicating this faster link to intra-cluster traffic somehow? I also set up SDN in the cluster; can that help here as well?
 
But I have this dedicated 2.5Gb Ethernet backbone I used for Ceph
That's a bit slow for Ceph.

and I am wondering if I can use it for cluster traffic and ZFS syncing.
Yes. It's not really useful for Corosync, as you usually want a dedicated low-latency NIC/network for that (preferably a Gbit NIC with no other traffic on it at all). But it could be useful as a dedicated network for replication/migration/NAS/backups.
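For example, giving the 2.5G NICs their own subnet in /etc/network/interfaces could look roughly like this (the interface name enp2s0 and the 10.10.10.0/24 subnet are just placeholders, substitute your own):

auto enp2s0
iface enp2s0 inet static
    address 10.10.10.1/24
    # no gateway here, this link only carries intra-cluster traffic

Give each node a unique address on that subnet (10.10.10.2, 10.10.10.3, ...) and apply the change with ifreload -a or a reboot.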
 
That's a bit slow for Ceph.
It is, but it was a lab and a proof of concept. I was examining the benefits of Ceph over ZFS sync for a 'cluster in a box' that I was considering for a client. The storage cluster was fine, if a bit slow, but it was the recovery after a power failure that eliminated it from the running; too much manual intervention was needed for a site with poor power redundancy. ZFS sync proved better suited in that case.

So where do I start with using these NICs for replication/migration? I am not sure where to look in the docs (https://pve.proxmox.com/pve-docs/pve-admin-guide.html); I don't know the right term for this kind of setup or its configuration.
 
You create a dedicated subnet for the 2.5G NICs and then use the 2.5G subnet IPs instead of the 1G subnet IPs when setting up services. For backups, just add the PBS/SMB/NFS storage with its 2.5G IP. For migration, you can set the migration network at Datacenter -> Options -> Migration Settings.
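As a rough sketch of the same thing from the CLI (IPs and names are placeholders): the migration network setting lives in /etc/pve/datacenter.cfg, and as far as I know the ZFS storage replication traffic follows that setting as well, which would cover your syncing use case too:

# /etc/pve/datacenter.cfg
migration: secure,network=10.10.10.0/24

Adding an NFS backup target over the fast link would then look something like:

pvesm add nfs backup-nas --server 10.10.10.50 --export /backups --content backup

where 10.10.10.50 is the NAS's address on the 2.5G subnet.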
 