Configure additional network interfaces

silke

Apr 15, 2025
As you will see from my post, I am not a network expert at all. I started my Proxmox journey with just one of the built-in NICs (2.5GbE RJ45), which was automatically configured during the installation of all three nodes.
But since every node also has two 10Gb SFP+ interfaces, I bought a 10GbE switch and a bunch of cables and would like to make the best use of them. The only problem: I have no idea how to configure this. At least I have an idea of how it could work in principle:
- use link aggregation to get the maximum possible speed
- have a separate bridge with a separate IPv4 subnet without a gateway (since I only have a gateway on my main network)
- use this new bridge for HA traffic (replication) and VM migration, later hopefully also for Ceph, and perhaps other cluster-internal (management) traffic

At the moment I have four NVMe drives in each of my nodes. Two of them are already set up as a ZFS mirror, since I was advised that this is the most reliable option. But since I have the disks and the network speed, I would like to at least experiment with Ceph. So the intended configuration should support both the existing ZFS HA traffic (moved away from the existing bridge) and future Ceph traffic. The existing 2.5GbE connection would then be used only for the VMs.

Can anyone help me set this up? Is there a sample /etc/network/interfaces file for such a configuration? Are there other config files I will have to deal with?
 
Hey,

[1] is probably a good start. But generally you add the physical NIC to a bridge ("Linux Bridge") and then do the IP configuration on the bridge. If you plan to create a bond, you first add the physical NICs to the bond [2] and then the bond to the bridge. For Ceph [3] you configure the IPs/subnets, not the NICs, so that is done afterwards. These are the basics and should cover most things, especially what you have described here. (Not really relevant here, but if you're interested you can also take a look at SDN [4].)
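Just to give you an idea, a bond of the two SFP+ ports behind a gateway-less bridge could look roughly like this in /etc/network/interfaces (untested sketch; the interface names and the 10.10.10.0/24 subnet are placeholders, check ip link for your actual names, and bond-mode 802.3ad/LACP needs a matching configuration on the switch):
Code:
# the two 10Gb SFP+ ports, names are just examples
auto enp1s0f0
iface enp1s0f0 inet manual

auto enp1s0f1
iface enp1s0f1 inet manual

# bond them together (802.3ad/LACP must also be set up on the switch)
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

# bridge on top of the bond, deliberately without a gateway
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0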

If you have any more specific questions, feel free to ask :)


[1] https://pve.proxmox.com/wiki/Network_Configuration
[2] https://pve.proxmox.com/wiki/Network_Configuration#sysadmin_network_bond
[3] https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster#pve_ceph_install_wizard
[4] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pvesdn
 
Thanks, I think I can now add my NICs to /etc/network/interfaces at least well enough to get a ping through these connections. What I haven't found yet: where do I specify which bridge to use for the different tasks? E.g. where do I set vmbr1 as the one for replication and migration of VMs, and vmbr2 for cluster networking (the one that checks whether all nodes of the cluster are alive)? I did not find anything about this in the GUI, nor in the Network_Configuration wiki page.

I also have a specific question about the cluster network. I read that it is recommended to have a distinct NIC for this, to avoid false "down" detections when the connection is saturated by other traffic, which could happen on a shared link. My problem is that I have three nodes but only two ports left on my switch (a bigger one doesn't fit into my 10" rack). I plan to use mostly the first two nodes for real work; the third node is mainly there to allow HA and doesn't have much to do. Is it a good idea to use a distinct NIC on the first two nodes, and on the third node add a second IP to vmbr0, which is also used for VM traffic?

Greetings,
Silke
 
@silke
You’re correct: Proxmox (and corosync) recommend a dedicated NIC for the cluster to avoid heartbeat issues during congestion.

A workable compromise:
  • Node1 & Node2: connect one dedicated NIC each to your spare switch ports and use this as the cluster network (vmbr2).
  • Node3: can share the VM traffic NIC (vmbr0) with an additional cluster IP. Corosync doesn't use much bandwidth, so this is fine for a lightly used node. The only downside is that if VM traffic saturates vmbr0, node3 might briefly lose quorum visibility; but since Node1 and Node2 have dedicated cluster NICs, the cluster as a whole should stay stable.
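To make that a bit more concrete: corosync only cares about which IP each node uses, not which interface carries it. Assuming all three cluster addresses end up in one subnet that every node can reach, the nodelist in /etc/pve/corosync.conf would look roughly like this (made-up node names and a made-up 10.0.0.0/24 cluster subnet; on node1/node2 the address sits on the dedicated NIC, on node3 it is simply an extra address on vmbr0):
Code:
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
# address on the dedicated cluster NIC
    ring0_addr: 10.0.0.11
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
# address on the dedicated cluster NIC
    ring0_addr: 10.0.0.12
  }
  node {
    name: pve3
    nodeid: 3
    quorum_votes: 1
# extra address on vmbr0
    ring0_addr: 10.0.0.13
  }
}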

Good luck!
 
where do I specify which bridge to use for the different tasks
/etc/pve/ceph.conf contains the subnets to use (cluster_network and public_network).
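For example, the relevant part of /etc/pve/ceph.conf looks roughly like this (10.10.10.0/24 standing in for whatever subnet you put on the 10Gb bridge):
Code:
[global]
    # Ceph OSD replication/heartbeat traffic
    cluster_network = 10.10.10.0/24
    # Ceph client and monitor traffic
    public_network = 10.10.10.0/24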

I don't have info handy on changing IPs, but from vague memory you can change the setting and restart the services. I'm sure it's been asked before on the forum, since I seem to recall reading about it somewhere.
 
Node3: can share the VM traffic NIC (vmbr0) with an additional cluster IP.
How is this done? Google suggests adding something like this line:
Code:
up ip addr add 192.168.1.11/24 dev $IFACE
to the vmbr0 config. Should this work? And then just add the (example) 192.168.1.11 to corosync.conf for node3 to use it as intended?

And for Node1 and Node2, do I really need a bridge? Wouldn't it be enough to add their IPs directly to the interface? Something like:
Code:
auto eno1
iface eno1 inet static
  address 192.168.1.10/24
???

And is corosync.conf automatically replicated between the nodes, or do I have to change the IPs on all three? If the latter, do they all get the same config_version?

Sorry for so many questions, but so much still to learn ;-)

Thanks,
Silke