What I want to achieve:
- Several predefined interfaces to choose from, each with a static VLAN ID, where neither the guest VM nor PVE manages (or is even aware of) the VLAN assignment. These exist to avoid the manual labor of granting access to common VLANs: the vast majority of VMs use them for external IP assignment, and filling the tags in by hand would be a major PITA.
- One virtual, VLAN-aware interface whose VLAN ID assignment is done via the Proxmox GUI. It is managed by a MikroTik CHR, which sits on the same interface untagged and handles the VLAN separation and the traffic inside it.
- VMs on that virtual interface which share a VLAN ID can connect to each other, even when they sit on different Proxmox nodes.
Note that bullet 3 was unachievable, hence the need to move to Open vSwitch.
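To put the same thing in qemu-server terms, this is roughly what I expect a guest's NIC entries to look like (the VMID, MACs and the tag value are made up, purely to illustrate the intent; bridge names as in the configs below):
Code:
# Illustrative only, not a real VM config of mine
# "Predefined" NIC: VLAN 40 is baked into the bridge, neither the guest nor PVE sees a tag
net0: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0
# CHR-managed NIC: the tag is picked per VM in the GUI, on the VLAN-aware bridge
net1: virtio=AA:BB:CC:DD:EE:02,bridge=vmbr2,tag=123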
How it was done previously:
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet manual
    bridge_ports eno1.40
    bridge_stp off
    bridge_fd 0
#Predefined

auto vmbr1
iface vmbr1 inet static
    address 172.11.11.11
    netmask 255.255.255.0
    gateway 172.11.11.1
    bridge_ports eno1.30
    bridge_stp off
    bridge_fd 0
#Management

auto vmbr2
iface vmbr2 inet static
    address 198.51.100.11 #actually unused
    netmask 255.255.255.0 #actually unused
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    bridge_vlan_aware yes
#CHR
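For clarity, this is my understanding of what the "predefined" part boils down to under the hood, taking vmbr0 as the example (a rough iproute2 equivalent of what ifupdown builds from the stanza above):
Code:
# Roughly what the vmbr0 stanza amounts to: a plain Linux bridge whose only member
# is the VLAN 40 subinterface of eno1, so every VM NIC plugged into vmbr0 lands in
# VLAN 40 without the guest (or PVE) ever dealing with a tag
ip link add link eno1 name eno1.40 type vlan id 40
ip link add vmbr0 type bridge
ip link set eno1.40 master vmbr0
ip link set eno1.40 up
ip link set vmbr0 up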
My new OVS config is like this:
Code:
auto lo
iface lo inet loopback

auto eno1
allow-vmbr0 eno1
iface eno1 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort
    ovs_options tag=40 vlan_mode=native-untagged
    ovs_mtu 9000

auto eno2
allow-vmbr0 eno2
iface eno2 inet manual

allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    pre-up ( ifconfig eno1 mtu 9000 && ifconfig eno2 mtu 9000 )
    ovs_ports eno1 vlan30 vlan700 vlan40
    ovs_mtu 9000
#Central Bridge

allow-vmbr0 vlan30
iface vlan30 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=30
    ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
    address 172.11.11.11
    netmask 255.255.255.0
    gateway 172.11.11.1
    ovs_mtu 9000
#Management

allow-vmbr0 vlan700
iface vlan700 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
    ovs_mtu 9000
    address 198.51.100.11
    netmask 255.255.255.0
#CHR

allow-vmbr0 vlan40
iface vlan40 inet manual
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=40
    ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
    ovs_mtu 9000
#Predefined
And not only does this not work as I expected, I have also run out of documentation applicable to my use case (OVS + ifupdown seems to be an uncommon combination outside of PVE; everything else covers either the ovs-* CLI tools or libvirt/OpenStack), and I'm thoroughly confused, since I was replicating the official docs and it still doesn't behave as described.
What doesn't work:
1. I can only select vmbr0; all of the OVSIntPorts are unselectable. And the information out there is conflicting: one source says IntPorts are for host management only and therefore inaccessible to guest VMs by design, while another (Ex. 1, to be precise) suggests that the admin should define an OVSBridge plus OVSIntPorts, and that the bridge together with the VLAN pseudo-interfaces will then be selectable from the network list in the VM properties. I just want it to behave the same way the old Linux bridges did.
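If I understand the OVS model correctly, the intended way might be to keep every VM on vmbr0 and put the VLAN tag on the VM's NIC instead, i.e. something like this (VMID 101 is hypothetical):
Code:
# My guess at the "OVS way": tag per NIC on the bridge itself (hypothetical VMID 101)
qm set 101 --net0 virtio,bridge=vmbr0,tag=40
# which should land in /etc/pve/qemu-server/101.conf as something like
#   net0: virtio=AA:BB:CC:DD:EE:03,bridge=vmbr0,tag=40
But if that really is the way, it defeats the point of the predefined interfaces from bullet 1: the tag is back to being set per VM instead of being baked into the bridge.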
2. Even if I resolve that, I have a feeling the CHR interface won't allow intra-cluster connections between VMs in the same VLAN, as described in bullet 3. I have seen guides about creating GRE tunnels between nodes for this, but can't I just configure OVS to do it instead? The nodes sit on a single physical switch, within a single port group, so why would I want another L2 abstraction on top? Besides, the tunnel approach doesn't scale: with 3 nodes it's manageable, with 5 too, but anything bigger quickly becomes slow, error-prone and plain inefficient, and I don't know anything about OpenFlow, which complicates the task further.
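For what it's worth, my working assumption is that if the uplink port trunks the guest VLANs (and the physical switch ports allow them), the switch will carry the tagged frames between nodes and no tunnels are needed. Something like this on each node, with trunks= being my guess at the relevant knob (the VLAN IDs are mine):
Code:
# Assumption: allow VLANs 30, 40 and 700 on the uplink, with 40 untagged as the native VLAN
# (in /etc/network/interfaces terms this would presumably go into ovs_options for eno1)
ovs-vsctl set Port eno1 vlan_mode=native-untagged tag=40 trunks=30,40,700
ovs-vsctl list Port eno1
If that is really all it takes, great, but I haven't found documentation that states it explicitly for a PVE cluster, hence the question.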
My brain is completely scrambled at this point. Is there any sane way (sane configs, sane docs) to accomplish this?
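If it helps, I can post the output of the following (assuming these are even the right things to look at):
Code:
# Bridge and port layout as OVS sees it
ovs-vsctl show
# VLAN settings (tag, trunks, vlan_mode) of the uplink and one of the internal ports
ovs-vsctl list Port eno1
ovs-vsctl list Port vlan40
# Addresses actually configured on the internal ports
ip -br addr show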
P.S. Yes, I do have the openvswitch-switch package and a recent enough Proxmox VE, but since you're going to ask anyway, here is the full pveversion -v output:
Code:
proxmox-ve: 6.1-2 (running kernel: 5.3.10-1-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-4.15: 5.4-9
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.21-2-pve: 5.0.21-7
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph-fuse: 12.2.12-pve1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12+deb10u1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-2
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2