I have a bond interface configured with two network adapters. This is on all my nodes.
The bond interface carries two networks, 10.1.1.x/24 and 10.1.2.x/24:
Bash:
auto bond0
iface bond0 inet static
    address 10.1.1.2/24
    bond-slaves enp1s0f0np0 enp1s0f1np1
    bond-miimon 100
    bond-mode 802.3ad

iface bond0 inet static
    address 10.1.2.2/24
10.1.1.x is meant for Ceph-to-Ceph communication, i.e. the Ceph cluster_network is 10.1.1.0/24, while non-Ceph traffic such as VM node-to-node migration goes over 10.1.2.x, i.e. the Proxmox cluster network is 10.1.2.0/24.
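For context, the Ceph side of this is set roughly as follows (a sketch with the CIDR above; my actual ceph.conf contains more than this, and the public_network setting is omitted here):

Code:
[global]
    # Ceph replication/heartbeat traffic pinned to the 10.1.1.x network
    cluster_network = 10.1.1.0/24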
I am doing it this way because I currently don't have the budget for the additional 25G/40G switches/NICs needed to create separate physical networks.
However, the Proxmox GUI only sees one IP for the bond, and if any changes are made to the network configuration there, I lose the 10.1.1.x configuration.
This then breaks that node's connectivity to the Ceph cluster network.
Is there any way to prevent Proxmox from removing the additional bond configuration, or am I doing this incorrectly and there is a better/official way to achieve this?