Hello,
I am new to Proxmox VE and I am not sure how to configure my 3-node cluster network to achieve a stable configuration. I have two physical NICs, from which I created a bond (in active-backup mode; LACP will probably be activated later). Then I created three Linux VLAN interfaces on top of that bond:
- The first one is for the Management and Storage network (bond0.vlanid, where I configured my management IP and the gateway)
- The second one (not routable from the network) is for Corosync (bond0.vlanid, where I only configured an IP, 192.168.10.xx)
- The third one (also not routable from the network) is for Migration (bond0.vlanid, where I only configured an IP, 192.168.11.xx)
Then I created a vmbr0 that also uses bond0 and is VLAN-aware (physically, I am connected to a trunk port), because I want to use this bridge later in the SDN to create the VLANs that will be attached to my virtual machines.
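To make the layout concrete, here is roughly what my /etc/network/interfaces looks like (a sketch only; the NIC names eno1/eno2, the VLAN IDs 10/20/30, and the exact addresses are placeholders for my real values):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup
    # later: bond-mode 802.3ad for LACP

# Management + Storage VLAN (placeholder ID 10), carries the gateway
auto bond0.10
iface bond0.10 inet static
    address 10.0.10.11/24
    gateway 10.0.10.1

# Corosync VLAN (placeholder ID 20), not routed
auto bond0.20
iface bond0.20 inet static
    address 192.168.10.11/24

# Migration VLAN (placeholder ID 30), not routed
auto bond0.30
iface bond0.30 inet static
    address 192.168.11.11/24

# VLAN-aware bridge on the same bond, for the VM / SDN VLANs
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```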
The problem I have now is that when the server reboots or the networking service restarts, my network no longer works (connection to the server, my bridge, etc.), but after running "ifreload -a" everything works again.
Do you have an idea how I could set up the network to achieve what I described above? If you could point me to a guide on how to configure it, that would be great.
Kind Regards,
Daniel