Storage replication on separate NIC

Dec 8, 2022
Planning a sizeable rebuild of my servers very soon. After asking questions on here, I've decided to cluster my three servers together, and I believe I've preemptively ironed out the main details for the questions I had. Initially I wasn't going to bother with HA, as I don't know that I want to set up Ceph, and shared storage through a NAS seems to leave a large single point of failure, negating HA. Just yesterday I stumbled upon replication, and it seems perfect for my use case: a once-a-day replication to the other nodes (even less often, to be honest) is perfectly fine for my setup. The guide I saw, however, doesn't set up a separate hardware NIC for the replication or cluster traffic. I could probably have everything use one NIC, but I imagine there must be a better way.

I researched, but perhaps due to my limited networking understanding I couldn't follow the other threads I read. The closest one that seemed applicable ballooned from talk of two NICs per node to four, and I have two: https://forum.proxmox.com/threads/question-about-creating-a-cluster-and-using-the-right-nic.45044/ Here's my setup, network-wise:

- Two nodes have two physical 10Gb NICs.
- One node currently has just a 1Gb NIC; I could add a USB 1Gb NIC for a second. If it's really important, I could add a two-port PCIe NIC through Thunderbolt, as it's an Intel NUC.

All nodes currently have just one network bridge, vmbr0, which handles both management and VM network duty. See the attached picture for the standard-issue network interfaces on my main node.

How do I, on this node for instance, enable enp9s0f1, give it an IP, and use it solely as the replication network to pass that data around? I assume I would use it for the cluster network as well?

When I look at the cluster creation menu, there currently isn't an IP available other than the main management one. Obviously I need to activate that NIC, but I honestly don't know how, or where I would go from there.

Any help would be appreciated. Feel free to link any guides, text or video, that go into better detail, but ideally still take into account the ELI5 that I kind of need here.

Thanks.
(Attachment: Capture.JPG)
 
You can either configure the IP address directly on the NIC or use the NIC as a port for a bridge, depending on whether you want VMs/CTs to be able to communicate over that network. Usually you don't, since such a network is typically reserved for migration traffic.

An example network config for this can be found in our PVE Docs [1]:

Code:
iface eno1 inet manual

# management network
auto vmbr0
iface vmbr0 inet static
    address 192.X.Y.57/24
    gateway 192.X.Y.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# migration network
auto eno2
iface eno2 inet static
    address  10.1.1.1/24
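
If you edit /etc/network/interfaces by hand like this, the new config still needs to be applied. Assuming ifupdown2 is installed (the default on current PVE releases), you can reload without rebooting; a quick sketch:

Code:
ifreload -a
ip -br addr show eno2

Alternatively, applying pending changes via the GUI (Node → System → Network → Apply Configuration) does the same thing.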

If you select Datacenter in the left sidebar and then go to Options, there is a setting for Migration Network. There you can specify the network you configured on the second NIC (in our example, 10.1.1.0/24). All migration traffic (including replication) should then be handled via the second NIC instead of the management network.
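
For reference, that GUI setting ends up in the cluster-wide file /etc/pve/datacenter.cfg; with the example above, the entry would look roughly like this (a sketch, see the datacenter.cfg documentation for the exact options):

Code:
# /etc/pve/datacenter.cfg
# route migration (and thus replication) traffic over the dedicated subnet
migration: secure,network=10.1.1.0/24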

I would add both links to the cluster network, since a redundant network really helps with the resilience of the cluster [2]. That is, if you use a second switch to hook up the second set of NICs; otherwise the switch would obviously still be a single point of failure.
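
Should you go that route, the redundant Corosync links can be given when creating (or joining) the cluster on the command line; roughly like this, with addresses assumed from the example above (see [2] for the exact syntax and priority semantics):

Code:
# on the first node: create the cluster with two Corosync links
pvecm create mycluster --link0 10.1.1.1,priority=20 --link1 192.168.1.57,priority=10
# on each further node: join over both links
pvecm add 10.1.1.1 --link0 10.1.1.2 --link1 192.168.1.58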

[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_migration_network
[2] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvecm_redundancy
 
Hi there,

Thank you for taking the time to write up such a detailed response. At least for right now I plan to just use one 10Gb switch for everything, so as you mentioned that single point of failure would still be there, but it is good to know that I should consider a separate switch in the future.

For now, given the single switch, would you recommend having the cluster and migration network on one NIC and subnet, and then having the management and VM/CT network on the other NIC and subnet?

Also, for me, is it acceptable if the management/VM/CT network is a 192.168.1.x/24 and I place the cluster/migration on a 192.168.2.x/24?

Thanks again for taking the time out of your day to give me some help.
 

Yes, it would definitely make sense to split it up like that. Usually we also recommend a dedicated network for Corosync itself (with the other networks as backups), since Corosync is very latency-sensitive, and heavy traffic (e.g. from backups or migration) on the same network can drive up latency and cause problems for the cluster. If that is not an option, the split you proposed should be fine.

The subnetting should be fine as well. Which particular scheme makes sense for you is hard to say, but those /24 subnets are perfectly reasonable if you don't have any special requirements.
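
Just to illustrate the non-overlap point, Python's standard ipaddress module can sanity-check a plan like this (a throwaway sketch using the example values from above):

```python
import ipaddress

# The two subnets proposed above (illustrative values)
mgmt = ipaddress.ip_network("192.168.1.0/24")     # management + VM/CT traffic
cluster = ipaddress.ip_network("192.168.2.0/24")  # cluster + migration traffic

# /24 networks on different third octets never overlap
print(mgmt.overlaps(cluster))   # False

# A node's second NIC at 192.168.2.57 lands in the cluster subnet only
node = ipaddress.ip_address("192.168.2.57")
print(node in cluster)          # True
print(node in mgmt)             # False
```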
 
Excellent, thanks again for taking the time. I don't have a third NIC, so there won't be a dedicated Corosync network, but I only plan to replicate once a day, so that network should be relatively quiet. All in all I feel pretty confident about the upcoming clustering and replication setup. Thanks!
 
