Old server cluster - 6x 1GbE NICs - best way to config?

GoZippy

I have a bunch of older servers - almost all have a 4-port 1GbE card plus 2x onboard 1GbE ports.


Right now I am only using one of the onboard NICs on each node. I have a Linux bridge, vmbr0, assigned to that onboard port, and all the VMs and LXCs run over that one 1GbE port to my switch and then to the 1GbE WAN uplink.


It works fine - until I added a Ceph cluster across the whole stack of nodes... it still works, but I do see a lot of traffic and some speed issues when moving or live-migrating a VM from one node to another...

I would like an expert to tell me (other than to buy new hardware - of course that is a given... though tips on cheap 10GbE cards I could install are welcome) how to separate Internet uplink traffic from local Corosync, Ceph, and management traffic, using all six of these available ports.

I was thinking of using the 4-port card for all Ceph and local node-to-node traffic, but I'm not entirely sure what the best setup is. Set up bonds on ports 0/1 and 2/3 and somehow assign PVE communications over one bond and Ceph over the other? How?

I do want to use HA and learn the best practice here... it all works swell on one NIC, but I am 100% positive that if I get much traffic it will crawl to a halt and mess with the Ceph cluster speeds and everything else.

What would you suggest I do with 6x 1GbE NICs?
 
On my R420 I have a PCIe 2x 10GbE card and 6x 1GbE NICs (2x onboard, 4x PCIe). Until I get some copper 10GbE SFP+ modules for the 10GbE card, I am using the 1GbE NICs bonded with LACP and connected to my switch: a 2x 1GbE ISP bond and a 4x 1GbE LAN bond. If you have a switch that supports LACP, I would look into that.
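For reference, here is roughly what that looks like in /etc/network/interfaces with ifupdown2 - just a sketch; the interface names (eno1/eno2 for the onboard pair, enp5s0f0-f3 for the PCIe card) and addresses are placeholders you'd swap for your own, and the switch side needs matching LACP (802.3ad) port-channels:

Code:
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer2+3
# bond0 = 2x 1GbE toward the ISP/firewall

auto bond1
iface bond1 inet manual
        bond-slaves enp5s0f0 enp5s0f1 enp5s0f2 enp5s0f3
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer2+3
# bond1 = 4x 1GbE LAN bond

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11/24
        gateway 192.168.1.1
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
# guests attach to vmbr0; note a single LACP flow still tops out at 1GbE

Worth keeping in mind: LACP adds aggregate bandwidth and redundancy across many flows, but any one connection (e.g. a single migration stream) is still limited to one member link.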
 
I have 10G NICs, six ports total per node, and have the ports assigned like this:

* Main (to firewall/Internet)
* Ceph 1
* Proxmox Corosync
* DB/SQL/Internal
* Migration
* Ceph 2


I'm not sure what is ideal either, but that is how I broke it down. The "Main" connection is the IP I use to ssh/web into the Proxmox cluster. I use two Ceph switches with "balance-rr", which is probably less than ideal, but it is working. The Proxmox/Corosync interface is internal IPs only. I only have one interface for Corosync, but really it is better to have two. The "DB" interface is just for the nodes' KVMs talking to each other; for example, a webserver talking to a database server would use that internal interface. Then there's a separate interface for doing migrations. And finally, the other Ceph interface, which forms part of the Ceph bond0. Each interface (or bond) is then bridged, so each is accessible via vmbrX. This isn't perfect, but maybe it gives you something to chew on while planning your network.
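To make that layout concrete, here is a trimmed /etc/network/interfaces sketch of the same idea - one bridge per role, plus a balance-rr bond across the two Ceph switches. The interface names and addresses are invented for illustration, not taken from the post above:

Code:
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.11/24
        gateway 192.168.0.1
        bridge-ports enp65s0f0
        bridge-stp off
        bridge-fd 0
# "Main": ssh/web management, route to firewall/Internet

auto bond0
iface bond0 inet manual
        bond-slaves enp65s0f1 enp65s0f5
        bond-mode balance-rr
        bond-miimon 100
# "Ceph 1" + "Ceph 2": round-robin across the two Ceph switches

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.11/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
# Ceph network

auto vmbr2
iface vmbr2 inet static
        address 10.1.1.11/24
        bridge-ports enp65s0f2
        bridge-stp off
        bridge-fd 0
# Corosync-only network, internal IPs

...and so on for the DB/internal and migration ports.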
 
@GoZippy Corosync is defined by what network it is on (e.g. 10.1.1.0/24). So just put the address you want Corosync to use on the interface you want. When you "join" a cluster, it asks which network you want to use.
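For example, on recent PVE versions (Corosync 3) you can point the join at a specific local address - the IPs here are made up:

Code:
pvecm add 10.1.1.1 --link0 10.1.1.7
# 10.1.1.1 = any existing cluster member, --link0 = this node's Corosync address

The same --link0 option is available on pvecm create when you first build the cluster.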
 
OK, but assuming I don't want to redo everything and reinstall from scratch... I'm just wondering where to change the config, or how it works to change it after it's already been on another subnet... or how to change to a different NIC within the config... oh well, I will look some more.
 

Ya, you don't have to redo everything from scratch. Just edit the network config to move the IP you want to use onto the interface you want. No need to re-join.
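As a sketch (assuming ifupdown2, with made-up names and addresses), that's just a one-line change in /etc/network/interfaces:

Code:
auto vmbr0
iface vmbr0 inet static
        address 10.1.1.7/24
        bridge-ports enp5s0f0    # was eno1 - only the bridge-port changed
        bridge-stp off
        bridge-fd 0

Then apply with ifreload -a. If the Corosync address itself is moving to a new subnet, you'd also edit the ring0_addr entries and bump config_version in /etc/pve/corosync.conf, per the Proxmox cluster docs.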
 
Also - it is weird that on most nodes it shows eno1 as the active port...

but on node 7 it shows enps0f0 as the active NIC port...

I currently only have one port connected - the onboard port, which SHOULD report as eno1, but for some odd reason Proxmox thinks it's the physical 4-port card attached to PCIe.

Anyhow, ideas welcome on how to set up the network to optimize traffic, sync, and Ceph data...
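One way to sort out which name belongs to which physical port, using standard iproute2/ethtool tools (nothing Proxmox-specific; swap in your own interface names):

Code:
ip -br link                # compact list: names, MACs, up/down state
ethtool -p eno1 10         # blink eno1's port LED for 10 seconds to find it
dmesg | grep -i eno        # see how the kernel enumerated the onboard ports

Predictable names like enp5s0f0 are derived from the PCIe slot and function, so whichever port actually has link will show as UP regardless of which one the GUI calls "active".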

 
