Network configuration for a new cluster

Matt998

New Member
Dec 1, 2023
Hi there.

I have 3 nodes and I would like to set up a cluster, using a Dell M5024 SAN as storage, accessible via iSCSI.
I know that this setup has drawbacks (no thin provisioning, no snapshots), but I already have the hardware, so I have to deal with it.

Each server has:
  • 8 10Gb ports
  • 2 1Gb ports

My idea is to use:
  • 2 10Gb ports for VM traffic
  • 2 10Gb ports for iSCSI
  • 2 10Gb ports for corosync
  • 2 10Gb ports for backups
  • 2 1Gb ports for management

Questions:
  1. I'd love to spare 2 10Gb ports by using 2 1Gb ports for corosync... but then I would have to set up the management IP on a VLAN on the "VM" interface. Is it a good idea? Any reason not to do it?
  2. Considering my hardware and wanting to have HA on the cluster, is LVM + multipath a good choice for the storage part?

Thanks a lot!
Matteo
 
I'd love to spare 2 10Gb ports by using 2 1Gb ports for corosync... but then I would have to set up the management IP on a VLAN on the "VM" interface. Is it a good idea? Any reason not to do it?
If implemented properly, there is no reason to dedicate 10G ports to corosync traffic.
Considering my hardware and wanting to have HA on the cluster, is LVM + multipath a good choice for the storage part?
You don't have many other choices, if any at all. Definitely implement multipath. Given your constraints, LVM is also required.
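Roughly, the moving parts are multipath on each node plus a shared LVM volume group on top of the multipath device. A sketch (device names, the mpatha alias, VG name, and storage ID are placeholders; depending on your multipath.conf you may need to whitelist the LUN's WWID first):

Code:
# on every node: install multipath and check that both paths to the LUN are seen
apt install multipath-tools
multipath -ll

# on one node: put LVM on the multipath device (mpatha assumes user_friendly_names)
pvcreate /dev/mapper/mpatha
vgcreate san_vg /dev/mapper/mpatha

# register the VG in Proxmox as shared LVM storage
pvesm add lvm san-lvm --vgname san_vg --shared 1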


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
In principle, your config is fine, but no one has touched on switch ports: what are you using for switches, and how are they configured? Can you LAG across switches?

Thanks! I have two Dell N4032 10Gbit switches, which are *not* stacked.
But each of them connects to two upstream switches, which are stacked.

Here are the node <-> switch connections for every node:
  • two ports for VM traffic: each to a different switch, port is set to TRUNK
  • two ports for iSCSI: each to a different switch, port is set to VLAN MEMBER (one VLAN for each controller/path, two in total), no L3 routing needed (see the sketch below)
  • two ports for corosync: each to a different switch, port is set to VLAN MEMBER, no L3 routing needed
  • two ports for backups: each to a different switch, port is set to VLAN MEMBER, no L3 routing needed
  • two ports for management: each to a different switch, port is set to VLAN MEMBER, L3 routing present
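For the iSCSI ports, the idea is one static address per path (no bond; multipath handles failover). Roughly, in /etc/network/interfaces (NIC names, subnets, and the jumbo MTU are just placeholders):

Code:
auto enp65s0f0
iface enp65s0f0 inet static
    address 10.10.1.11/24
    mtu 9000
#iSCSI path A

auto enp65s0f1
iface enp65s0f1 inet static
    address 10.10.2.11/24
    mtu 9000
#iSCSI path B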

Does it look ok?
 
Does it look ok?
Well, this means you don't have any fault tolerance, which is probably not what you want. --edit: I don't know how you'd create a LAG with switches in the middle; why don't you just stack your Dells and have an LACP bond for VM and management (probably backup too) traffic?

two ports for VM traffic: each to a different switch, port is set to TRUNK
two ports for management: each to a different switch, port is set to VLAN MEMBER, L3 routing present

If you have both ports connected to the same vmbr, you'll create a loop and one port will get STP'd off (or worse, lock up the whole VLAN if you don't have STP enabled). What you'd probably want to do here is bond those together active/passive. If you need more bandwidth for your VMs, consider making two vmbrs instead, but that means you won't have path fault tolerance.
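A rough sketch of that in /etc/network/interfaces (NIC names are made up, adapt to your hardware):

Code:
auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp65s0f1
    bond-mode active-backup
    bond-miimon 100
    bond-primary enp65s0f0

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
#both uplinks behind one bridge via an active/passive bond - no loop, no switch-side config needed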

  • two ports for iSCSI: each to a different switch, port is set to VLAN MEMBER (one VLAN for each controller/path, two in total), no L3 routing needed
  • two ports for corosync: each to a different switch, port is set to VLAN MEMBER, no L3 routing needed
For those purposes, this config is ideal. Make sure that each port is on a different VLAN. When creating your cluster, use both corosync networks.
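For example (addresses are placeholders for your two corosync VLANs; the join address is the first node's management IP):

Code:
# on the first node
pvecm create mycluster --link0 10.20.1.11 --link1 10.20.2.11

# on each additional node
pvecm add 192.168.10.11 --link0 10.20.1.12 --link1 10.20.2.12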

Two ports for backups: each to a different switch, port is set to VLAN MEMBER, no L3 routing needed
Without understanding what the target is, how it's connected, or how much bandwidth it can take for a single stream, I have no comment.
 
Thanks! Actually I forgot some details about the ports.

My idea was to use bonding on the nodes, something that does not require switch configuration, like balance-alb.
I am not interested in performance but in HA (it's OK if only one 10G interface is operating at a time).

I used a similar setup with another virtualization solution (Hyper-V) where I had to bond interfaces without having to rely on the switch configuration. As for the switches, I cannot change their config at the moment (setting up stacking and so on).
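In /etc/network/interfaces that would just swap the bond mode in the sketch above (NIC names are placeholders), with the bridge on top unchanged:

Code:
auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp65s0f1
    bond-mode balance-alb
    bond-miimon 100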
 
