I'm not new to hypervisors, but I am new to Proxmox. I'm building a proof of concept for my enterprise and trying to follow best practices so I don't hate myself down the road. I'm running into some advanced network configuration issues and haven't found clear directions.
Plan:
I am planning to build out a 3-node cluster where each node has 4x 25G ports and 2x 10G ports, with 2 switches supporting 10/25/100G speeds and 1 TrueNAS SAN with 4x 25G ports and 2x 10G ports. I intend to dedicate 2x 25G ports to Proxmox host/web UI management along with all of my other VM VLAN traffic, using an OVS bond, bridge, and IntPorts physically connected to switch ports configured with port-channel, switchport mode, VLANs, etc. Next, I intend to dedicate the other 2x 25G ports to storage (iSCSI plus VLAN traffic) over physical connections to the same switch. Lastly, I was planning to use the 2x 10G ports for cluster corosync traffic. I originally planned to bond those two ports in active-backup mode, but after more reading, it seems that isn't the best idea; I was hoping to use them as redundant connections in case one switch went down, and corosync (knet) apparently handles multiple links natively, as sketched below.
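If that's right, I'd create the cluster with two separate corosync links instead of a bond, one per 10G port and switch. A sketch of what I have in mind (the cluster name and the 10.0.9.x subnet for the second link are made up for this post):
Code:
# Two independent corosync links for redundancy, one per 10G port/switch
# (10.0.7.7 matches the vlan7 address in my config below;
#  10.0.9.7 is a placeholder for the second 10G port)
pvecm create pve-poc --link0 10.0.7.7 --link1 10.0.9.7
If that's the wrong way to get switch-level redundancy for corosync, I'd appreciate a pointer.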
Finally, I am planning to move over to SDN once the cluster is up and running. I would also like to try Proxmox's IPAM integration with NetBox, since my company already uses NetBox.
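From my reading of the SDN docs, the NetBox IPAM gets registered in /etc/pve/sdn/ipams.cfg; something roughly like this (the URL and token are placeholders, and I haven't actually wired this up yet, so treat it as a sketch):
Code:
# Sketch: register NetBox as an SDN IPAM (values are placeholders)
netbox: netbox-poc
	url https://netbox.example.com/api
	token 0123456789abcdef0123456789abcdef01234567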
I am not planning on using MLAG for this POC, but I would like to revisit it down the road.
I have successfully created the OVS bond/bridge/IntPort for two of the physical links with a static IP, but I'm stuck trying to bring up the next physical link for corosync. I cannot ping from the node to the L3 IP on the switch's VLAN interface, or vice versa.
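For reference, these are the sanity checks I've been running so far (the switch SVI address 10.0.7.1 is made up for this post):
Code:
ovs-vsctl show               # confirm vmbr1 holds eno1 and vlan7 with the expected tags
ip addr show vlan7           # confirm 10.0.7.7/24 is assigned and the port is up
tcpdump -eni eno1            # watch whether frames leave/arrive tagged, untagged, or not at all
ping -M do -s 8972 10.0.7.1  # jumbo-frame check: 9000 MTU minus 28 bytes of ICMP/IP headers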
Here's a copy of my /etc/network/interfaces (note: I've sanitized it for external use):
Code:
#PVE Open vSwitch Configuration
# Loopback interface
auto lo
iface lo inet loopback
# Bond eth0 and eth1 together
auto eth0
iface eth0 inet manual
ovs_mtu 9000
auto eth1
iface eth1 inet manual
ovs_mtu 9000
auto bond0
iface bond0 inet manual
ovs_bridge vmbr0
ovs_type OVSBond
ovs_bonds eth0 eth1
ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast other_config:bond-rebalance-interval=0
ovs_mtu 9000
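# NOTE: bond_mode=balance-tcp with lacp=active needs the switch side to be an
# LACP port-channel (e.g. channel-group mode active); without a negotiated
# LACP partner this bond will not pass traffic.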
# Bridge for our bond and vlan virtual interfaces (our VMs will
# also attach to this bridge)
auto vmbr0
iface vmbr0 inet manual
ovs_type OVSBridge
ovs_ports bond0 vlan2 vlan4
ovs_mtu 9000
# Proxmox cluster communication vlan
auto vlan2
iface vlan2 inet static
ovs_type OVSIntPort
ovs_bridge vmbr0
ovs_options tag=2
address 10.0.1.2
netmask 255.255.255.0
gateway 10.0.1.1
ovs_mtu 9000
# Ceph cluster communication vlan (jumbo frames)
auto vlan4
iface vlan4 inet static
ovs_type OVSIntPort
ovs_bridge vmbr0
ovs_options tag=4
address 10.0.8.8
netmask 255.255.255.0
ovs_mtu 9000
# Bridge for the eno1 physical interface (corosync cluster traffic)
auto vmbr1
iface vmbr1 inet manual
ovs_type OVSBridge
ovs_ports eno1 vlan7
ovs_mtu 9000
# Physical interface for traffic coming into the system. Retag untagged
# traffic into vlan 7, but pass through other tags.
auto eno1
iface eno1 inet manual
ovs_bridge vmbr1
ovs_type OVSPort
ovs_options tag=7 vlan_mode=native-untagged
ovs_mtu 9000
# Virtual interface carrying the originally untagged traffic (corosync IP)
auto vlan7
iface vlan7 inet static
ovs_type OVSIntPort
ovs_bridge vmbr1
ovs_options tag=7
address 10.0.7.7
netmask 255.255.255.0
ovs_mtu 9000
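One simplification I'm considering, in case the OVS tagging on vmbr1 is the culprit: since nothing but corosync uses that link, drop the bridge entirely and put the address straight on eno1, with the switch port set as an untagged/access port in VLAN 7. A sketch of what would replace the vmbr1/eno1/vlan7 stanzas above (same made-up addressing):
Code:
# Plain corosync link without OVS; switch port untagged/access in VLAN 7
auto eno1
iface eno1 inet static
address 10.0.7.7
netmask 255.255.255.0
mtu 9000
If I keep the OVS version instead, I assume I mainly need to confirm that the switch port's native/untagged VLAN actually matches tag=7 and that the switch SVI accepts jumbo frames.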