VMs can't talk to the network and I can't talk to the VMs

Everything was working and now it isn't. I have the following bridges:

auto lo
iface lo inet loopback

iface enp5s0f0 inet manual

iface enp5s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.xx.x0
        netmask 255.255.255.0
        gateway 192.168.xx.x253
        bridge-ports enp5s0f0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.168.xx.x1
        netmask 255.255.255.0
        bridge-ports enp5s0f1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094



I was able to get to the servers via ICMP/SSH/RDP, but now I cannot. They can't even ping each other. This is racking my brain. I had it working and everything was fine until today, and I haven't changed anything. This goes for both VMs and containers.
 
Please post the output of `ip route` and `ip addr`.

Do vmbr0 and vmbr1 have IPs from the same subnet? (This does not work as you might expect - only one of them gets the route, and with it the reply packets for the containers.)
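
You can see which of the two routes the kernel actually picks with `ip route get` (a generic iproute2 check - the address below is only a placeholder, substitute one of your container/VM IPs):

ip route get 192.168.0.123

The `dev` and `src` it prints are what the host will use for reply traffic into that subnet.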
 
`ip route`:

default via 192.168.200.252 dev vmbr1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.18.0.0/16 dev docker_gwbridge proto kernel scope link src 172.18.0.1
192.168.200.0/24 dev vmbr0 proto kernel scope link src 192.168.200.90
192.168.200.0/24 dev vmbr1 proto kernel scope link src 192.168.200.91



`ip addr`:

root@pve1:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp5s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether 00:25:90:49:ec:c8 brd ff:ff:ff:ff:ff:ff
3: enp5s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
link/ether 00:25:90:49:ec:c9 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:25:90:49:ec:c8 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.90/24 brd 192.168.200.255 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::225:90ff:fe49:ecc8/64 scope link
valid_lft forever preferred_lft forever
5: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:25:90:49:ec:c9 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.91/24 brd 192.168.200.255 scope global vmbr1
valid_lft forever preferred_lft forever
inet6 fe80::225:90ff:fe49:ecc9/64 scope link
valid_lft forever preferred_lft forever
6: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ce:78:8d:ea brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global docker_gwbridge
valid_lft forever preferred_lft forever
inet6 fe80::42:ceff:fe78:8dea/64 scope link
valid_lft forever preferred_lft forever
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:fa:6a:66:e1 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
16: veth2d3f9a9@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default
link/ether 9e:e4:52:9d:36:cd brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::9ce4:52ff:fe9d:36cd/64 scope link
valid_lft forever preferred_lft forever
20: veth10c29d7@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default
link/ether 3e:cb:53:f9:fc:87 brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet6 fe80::3ccb:53ff:fef9:fc87/64 scope link
valid_lft forever preferred_lft forever
21: tap103i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
link/ether 1a:b5:78:40:fd:eb brd ff:ff:ff:ff:ff:ff
 
192.168.200.0/24 dev vmbr0 proto kernel scope link src 192.168.200.90
192.168.200.0/24 dev vmbr1 proto kernel scope link src 192.168.200.91
As written, having one subnet on 2 interfaces does not work as you might expect. If you need all your VMs to be in one subnet, configure only one bridge and put them all into that. If you need 2 interfaces, choose 2 different subnets for them.
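
For the two-subnet variant, only the vmbr1 stanza would need to change - roughly like this, with yy standing in for a second /24 that differs from the one on vmbr0:

auto vmbr1
iface vmbr1 inet static
        address 192.168.yy.x1
        netmask 255.255.255.0
        bridge-ports enp5s0f1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

Another common variant is to give vmbr1 no host IP at all (iface vmbr1 inet manual, with no address/netmask lines): guests bridged to it still get layer-2 connectivity, and the host keeps a single route via vmbr0.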
 
On a gut feeling - does it work with your current config if you enable ip_forward in sysctl?
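
For reference, checking that and enabling it persistently is plain sysctl usage, nothing Proxmox-specific:

sysctl net.ipv4.ip_forward                                          # show the current value
sysctl -w net.ipv4.ip_forward=1                                     # enable until the next reboot
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-ip-forward.conf   # make it persistent
sysctl -p /etc/sysctl.d/99-ip-forward.conf                          # load the new file now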

Apart from that - I personally would definitely suggest using only one bridge per network!
 
It appears the bridge is forwarding:
root@pve1:~# brctl showstp vmbr1
vmbr1
bridge id 8000.00259049ecc9
designated root 8000.00259049ecc9
root port 0 path cost 0
max age 20.00 bridge max age 20.00
hello time 2.00 bridge hello time 2.00
forward delay 2.00 bridge forward delay 2.00
ageing time 300.00
hello timer 0.33 tcn timer 0.00
topology change timer 0.00 gc timer 37.29
flags


enp5s0f1 (1)
port id 8001 state forwarding
designated root 8000.00259049ecc9 path cost 4
designated bridge 8000.00259049ecc9 message age timer 0.00
designated port 8001 forward delay timer 0.00
designated cost 0 hold timer 0.00
flags

tap103i0 (2)
port id 8002 state forwarding
designated root 8000.00259049ecc9 path cost 100
designated bridge 8000.00259049ecc9 message age timer 0.00
designated port 8002 forward delay timer 0.00
designated cost 0 hold timer 0.00
flags


root@pve1:~# ^C
root@pve1:~# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
root@pve1:~# cat /proc/sys/net/ipv4/ip_forward
1
root@pve1:~#
 
Weird. I spun up a quick container; from the host I can ping the container:
root@pve1:~# ping 192.168.200.41
PING 192.168.200.41 (192.168.200.41) 56(84) bytes of data.
64 bytes from 192.168.200.41: icmp_seq=1 ttl=64 time=0.032 ms
64 bytes from 192.168.200.41: icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from 192.168.200.41: icmp_seq=3 ttl=64 time=0.044 ms
64 bytes from 192.168.200.41: icmp_seq=4 ttl=64 time=0.032 ms
64 bytes from 192.168.200.41: icmp_seq=5 ttl=64 time=0.044 ms
64 bytes from 192.168.200.41: icmp_seq=6 ttl=64 time=0.046 ms
64 bytes from 192.168.200.41: icmp_seq=7 ttl=64 time=0.031 ms
^C


From the container I can ping the NIC address of the PVE host. It's just traffic leaving the host that isn't working.
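
For what it's worth, that test can also be run from the host side with pct exec (the container ID 999 is just a placeholder for whatever ID the test container got):

pct exec 999 -- ping -c 3 192.168.200.90     # the host's vmbr0 address
pct exec 999 -- ping -c 3 192.168.200.252    # the default gateway, i.e. something beyond the host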
 
