I have a Proxmox server, and I want to add VMs to this host so they can communicate with each other on a private network using eth1 and have internet access via eth0.
On this host, I have vmbr0 with the following configuration:
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        bridge-ports bond0
        bridge-stp on
        bridge-vlan-aware yes
        bridge-vids 2-1000
        bridge-fd 0

iface eno1 inet manual
iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual
iface eno49 inet manual
iface eno50 inet manual
iface ens19 inet manual
iface enp0s19 inet manual
I have one bond and two VLAN interfaces in this configuration:
auto bond0
iface bond0
        bond-slaves eno49 eno50 eno1 eno2 eno3 eno4
        bond-downdelay 200
        bond-miimon 100
        bond-updelay 200
        bond-lacp-rate 1
        bond-mode 4

auto vlan801
iface vlan801 inet static
        vlan-raw-device vmbr0
        address 185.X.X.X/28
        netmask 255.255.255.240
        post-up ip r add default via 185.X.X.X dev vlan801
        mtu 1500

auto vlan81
iface vlan81 inet static
        vlan-raw-device vmbr0
        address 172.20.20.22/24
        netmask 255.255.255.0
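Since vmbr0 is VLAN-aware, a VM's NIC only sees VLAN 81 traffic if its bridge port is actually a member of that VLAN (i.e. the NIC was created with a VLAN tag in the VM's hardware settings). For reference, this can be checked on the Proxmox host with commands like the following (VM ID 100 is just an example):

```shell
# List VLAN membership for every port on the bridge; each VM's
# private NIC (tap device) should show VLAN 81 here.
bridge vlan show

# Show the NIC definitions of a VM; a private NIC on vmbr0 should
# carry "tag=81" (VM ID 100 is an example, adjust to your VMs).
qm config 100 | grep ^net
```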
These are my routes on the host:
default via 185.X.X.X dev vlan801
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.18.0.0/16 dev br-15d69140143b proto kernel scope link src 172.18.0.1
172.20.20.0/24 dev vlan81 proto kernel scope link src 172.20.20.22
After creating my VMs, I can access the internet through eth0, but the VMs cannot see each other on the private network.
This is the netplan configuration for my VMs:
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 185.x.x.x
      gateway4: 185.x.x.x
      match:
        macaddress: 4e:fa:cf:12:44:eb
      nameservers:
        addresses:
          - 8.8.4.4
        search: []
      set-name: eth0
    eth1:
      addresses:
        - 172.20.20.17/24
      match:
        macaddress: e6:df:87:03:fe:c6
      set-name: eth1
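For completeness, after editing this file inside a VM the configuration can be validated and applied like so (`netplan try` rolls back automatically if connectivity is lost):

```shell
# Validate and apply the netplan configuration inside the VM;
# reverts after a timeout unless the change is confirmed.
netplan try

# Confirm the private address is actually assigned to eth1.
ip -br addr show eth1
```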
and these are the routes on one of the VMs:
default via 185.x.x.x dev eth0 proto static
172.20.20.0/24 dev eth1 proto kernel scope link
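To narrow down where the private traffic is being dropped, the path can be probed hop by hop with something like the following (a sketch; 172.20.20.18 stands in for the second VM's address, and tap101i1 is an example tap device name):

```shell
# From VM 1: try to reach VM 2, then check whether ARP resolved.
ping -c 3 172.20.20.18
ip neigh show dev eth1   # a "FAILED" entry means layer 2 is broken

# On the Proxmox host: watch whether the ARP requests from the VM
# ever appear on the bridge at all.
tcpdump -eni vmbr0 arp and host 172.20.20.18
```

If ARP never resolves, the problem is almost certainly VLAN membership on the bridge ports rather than the routing tables shown above.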
I'm not sure why the VMs can reach the internet through eth0 but cannot see each other on the private network over eth1. Any guidance or suggestions on how to resolve this would be greatly appreciated!