Hi brains trust,
I have a 2-node cluster set up and working. Each node is connected to the public internet via eno1 and to the other node (via a switch) via eno2. I have created two Linux bridges, vmbr0 and vmbr1, bridging eno1 and eno2 respectively, such that:
Node1:
- vmbr0: <public ip>
- vmbr1: 10.0.0.1/24
Node2:
- vmbr0: <public ip>
- vmbr1: 10.0.0.2/24
From each node I can ping the other node on either its public IP or its 10.0.0.x private address.
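For example, from node1:
Bash:
ping -c 3 10.0.0.2           # private link via vmbr1
ping -c 3 <node2-public-ip>  # public path via vmbr0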
I currently have a working EVPN setup that just uses the public IPs as the peers. Any VM attached to a VNet in the EVPN can reach the public internet fine.
The problem with this setup is that all VM(node1) <-> VM(node2) traffic goes over the public internet, which I want to avoid; I would rather have it routed via vmbr1. However, I still need each of the VMs to be able to reach the public internet.
I've tried (see config below) creating an EVPN controller with the vmbr1 IPs (10.0.0.1, 10.0.0.2) instead of the public IPs, and when creating the EVPN zone I set both nodes as exit nodes, with node2 as the primary. However, while two VMs in this zone can communicate with each other, they can't ping public IP addresses - including the public IP address of the host node.
I'm sure I'm missing something obvious, and my networking knowledge is pretty basic, but I assume this is a pretty common use case (keeping inter-VM traffic off the public-facing interface)? Any assistance greatly appreciated! Let me know if you need any additional details.
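In case it helps, these are the checks I've been running on the nodes (standard FRR vtysh / iproute2 commands; vrf_test is my guess at the VRF name the SDN layer generates for a zone called "test"):
Bash:
# BGP sessions - should now peer on the 10.0.0.x addresses
vtysh -c "show bgp summary"
vtysh -c "show bgp l2vpn evpn summary"

# routes inside the zone's VRF on the exit node
ip route show vrf vrf_test
vtysh -c "show ip route vrf vrf_test"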
NODE CONFIG:
Node /etc/network/interfaces
Bash:
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
        address <public-ip>
        gateway <public-ip-gw>
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr1
iface vmbr1 inet static
        address 10.0.0.1/24
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

source /etc/network/interfaces.d/*
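For reference, a quick way to confirm which link the overlay traffic actually takes (VXLAN rides on UDP port 4789) while pinging VM to VM:
Bash:
# should show traffic once the peers are the 10.0.0.x addresses
tcpdump -ni eno2 udp port 4789
# ...and ideally nothing here
tcpdump -ni eno1 udp port 4789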
/etc/pve/sdn/controllers.cfg
Bash:
evpn: myevpn
        asn 65000
        peers 10.0.0.1, 10.0.0.2
/etc/pve/sdn/zones.cfg
Bash:
evpn: test
        controller myevpn
        vrf-vxlan 10000
        disable-arp-nd-suppression 1
        exitnodes node1,node2
        exitnodes-primary node2
        ipam pve
        mac <hidden>
        mtu 1350
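One thing I've double-checked on the exit nodes is that IP forwarding is on, since the exit node has to route between the zone's VRF and its default table (not sure whether the SDN stack enables this itself):
Bash:
sysctl net.ipv4.ip_forward   # expecting net.ipv4.ip_forward = 1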
/etc/pve/sdn/vnets.cfg
Code:
vnet: mynet
        zone test
        tag 10100
/etc/pve/sdn/subnets.cfg
Code:
subnet: test-10.20.20.0-24
        vnet mynet
        gateway 10.20.20.1
        snat 1
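With snat 1 set I'd expect a masquerade rule for the subnet on the exit node; I've been looking for it with (assuming it lands in the iptables POSTROUTING chain):
Bash:
iptables -t nat -S POSTROUTING | grep 10.20.20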
VM Config:
/etc/netplan/config.yaml
YAML:
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 10.20.20.101/32
      match:
        macaddress: <hidden>
      nameservers:
        addresses:
          - 8.8.8.8
      routes:
        - on-link: true
          to: default
          via: 10.20.20.1
      set-name: eth0
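For completeness, the tests from inside a VM (10.20.20.102 is just the other test VM's address):
Bash:
ip route                 # default via 10.20.20.1, on-link
ping -c 3 10.20.20.102   # VM on the other node - works
ping -c 3 8.8.8.8        # any public IP - fails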