Strange network behavior?

chupacabra

New Member
Apr 9, 2023
A node is configured with three interfaces, each in a separate subnet, as shown below:

Code:
auto lo
iface lo inet loopback

iface enp0s31f6 inet manual
#NIC on motherboard

iface wlo1 inet manual

auto enp1s0
iface enp1s0 inet manual
        mtu 9000
#10GbE Card Port 0

auto enp1s0d1
iface enp1s0d1 inet manual
        mtu 9000
#10GbE Card Port 1

auto enp6s0f0
iface enp6s0f0 inet manual
#1GbE Card Port 0

auto enp6s0f1
iface enp6s0f1 inet manual
#1GbE Card Port 1

auto bond0
iface bond0 inet manual
        bond-slaves enp6s0f0 enp6s0f1
        bond-miimon 100
        bond-mode 802.3ad
#LACP of 1GbE Card Ports

auto vmbr0
iface vmbr0 inet manual
        address 10.10.100.100/24
        gateway 10.10.100.1
        bridge-ports enp0s31f6
        bridge-stp off
        bridge-fd 0
#Bridge on motherboard NIC

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.20/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#Bridge on bond0

auto vmbr2
iface vmbr2 inet static
        address 10.10.50.100/24
        bridge-ports enp1s0d1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-124
        mtu 9000
#Bridge for 10GbE Ports

As soon as I enable this configuration, I can ping all the interfaces:
Code:
~ ping 10.10.100.100
PING 10.10.100.100 (10.10.100.100): 56 data bytes
64 bytes from 10.10.100.100: icmp_seq=0 ttl=64 time=2.318 ms
64 bytes from 10.10.100.100: icmp_seq=1 ttl=64 time=2.754 ms
64 bytes from 10.10.100.100: icmp_seq=2 ttl=64 time=3.108 ms
^C
--- 10.10.100.100 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 2.318/2.727/3.108/0.323 ms
~ ping 10.10.10.20
PING 10.10.10.20 (10.10.10.20): 56 data bytes
64 bytes from 10.10.10.20: icmp_seq=0 ttl=64 time=2.928 ms
64 bytes from 10.10.10.20: icmp_seq=1 ttl=64 time=2.683 ms
^C
--- 10.10.10.20 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 2.683/2.805/2.928/0.122 ms
~ ping 10.10.50.50
PING 10.10.50.50 (10.10.50.50): 56 data bytes
64 bytes from 10.10.50.50: icmp_seq=0 ttl=63 time=3.157 ms
64 bytes from 10.10.50.50: icmp_seq=1 ttl=63 time=2.859 ms
^C
--- 10.10.50.50 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 2.859/3.008/3.157/0.149 ms

However, I can only access the management UI through the 10.10.10.20 interface. Any ideas as to what may be happening?
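One thing I plan to verify is what address the web UI is actually bound to; assuming the default Proxmox port 8006, the check would look roughly like this (the listener line in the comments is only illustrative):

Code:
# Check whether the web UI (pveproxy) is listening on all addresses or only one.
ss -tlnp | grep 8006
# A line like "LISTEN ... *:8006 ... pveproxy" would mean the UI itself accepts
# connections on every interface, and the problem is more likely routing or
# filtering between the client and the other two subnets.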
 
I ran some more ping tests, this time from the node itself.

Code:
root@pve01:~# ping google.com -I vmbr0
PING google.com (74.125.138.138) from 10.10.100.100 vmbr0: 56(84) bytes of data.
64 bytes from yi-in-f138.1e100.net (74.125.138.138): icmp_seq=1 ttl=55 time=35.0 ms
64 bytes from yi-in-f138.1e100.net (74.125.138.138): icmp_seq=2 ttl=55 time=34.1 ms
64 bytes from yi-in-f138.1e100.net (74.125.138.138): icmp_seq=3 ttl=55 time=32.7 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2483ms
rtt min/avg/max/mdev = 32.720/33.933/34.959/0.923 ms
root@pve01:~# ping google.com -I vmbr1
PING google.com (74.125.138.138) from 10.10.10.20 vmbr1: 56(84) bytes of data.
^C
--- google.com ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 5107ms
pipe 3
root@pve01:~# ping google.com -I vmbr2
PING google.com (74.125.138.113) from 10.10.50.100 vmbr2: 56(84) bytes of data.
^C
--- google.com ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3065ms

It seems the only interface that gets out to the internet is vmbr0, but when pinging the router, two of the three interfaces work:

Code:
root@pve01:~# ping 10.10.1.1 -I vmbr0
PING 10.10.1.1 (10.10.1.1) from 10.10.100.100 vmbr0: 56(84) bytes of data.
64 bytes from 10.10.1.1: icmp_seq=1 ttl=64 time=0.162 ms
64 bytes from 10.10.1.1: icmp_seq=2 ttl=64 time=0.148 ms
^C
--- 10.10.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.148/0.155/0.162/0.007 ms
root@pve01:~# ping 10.10.1.1 -I vmbr1
PING 10.10.1.1 (10.10.1.1) from 10.10.10.20 vmbr1: 56(84) bytes of data.
64 bytes from 10.10.1.1: icmp_seq=1 ttl=64 time=0.131 ms
64 bytes from 10.10.1.1: icmp_seq=2 ttl=64 time=0.186 ms
^C
--- 10.10.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1007ms
rtt min/avg/max/mdev = 0.131/0.158/0.186/0.027 ms
root@pve01:~# ping 10.10.1.1 -I vmbr2
PING 10.10.1.1 (10.10.1.1) from 10.10.50.100 vmbr2: 56(84) bytes of data.
^C
--- 10.10.1.1 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4084ms
pipe 4
 
This behavior is normal.
You have one gateway, 10.10.100.1, and can ping outside from the interface on the same network, vmbr0.
The other interfaces have no gateway (and should not, in most cases), so the network stack does not know where packets from those interfaces need to be routed. From those interfaces you can only ping addresses in the corresponding networks: 10.10.10.0/24 for vmbr1 and 10.10.50.0/24 for vmbr2.
You say you ping your router at 10.10.1.1, but I don't see any interface with an address in the 10.10.1.0/24 network.
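A quick way to confirm this is to look at the node's routing table; the routes sketched below are what I would expect from the configuration you posted, not actual output:

Code:
# Show the kernel routing table on the node.
ip route show
# Expected, roughly:
#   default via 10.10.100.1 dev vmbr0        <- the only default route
#   10.10.10.0/24  dev vmbr1 proto kernel scope link src 10.10.10.20
#   10.10.50.0/24  dev vmbr2 proto kernel scope link src 10.10.50.100
#   10.10.100.0/24 dev vmbr0 proto kernel scope link src 10.10.100.100
# Anything outside those three subnets can only leave via vmbr0.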
 
Thanks for that explanation, but it seems something else is going on with the networking. I have a NAS at 10.10.50.50 and tried pinging it via all the interfaces; the only one that responds is vmbr0, which is the one with the gateway:

Code:
root@pve01:~# ping 10.10.50.50 -I vmbr0
PING 10.10.50.50 (10.10.50.50) from 10.10.100.100 vmbr0: 56(84) bytes of data.
64 bytes from 10.10.50.50: icmp_seq=1 ttl=63 time=0.591 ms
64 bytes from 10.10.50.50: icmp_seq=2 ttl=63 time=0.399 ms
64 bytes from 10.10.50.50: icmp_seq=3 ttl=63 time=0.247 ms
64 bytes from 10.10.50.50: icmp_seq=4 ttl=63 time=0.492 ms
^C
--- 10.10.50.50 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3066ms
rtt min/avg/max/mdev = 0.247/0.432/0.591/0.126 ms
root@pve01:~# ping 10.10.50.50 -I vmbr1
PING 10.10.50.50 (10.10.50.50) from 10.10.10.20 vmbr1: 56(84) bytes of data.
^C
--- 10.10.50.50 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3062ms
pipe 4
root@pve01:~# ping 10.10.50.50 -I vmbr2
PING 10.10.50.50 (10.10.50.50) from 10.10.50.100 vmbr2: 56(84) bytes of data.
^C
--- 10.10.50.50 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4083ms
 
TTL=63 suggests the pings are going through the router.
Try traceroute instead of ping.
 
Also, once the settings are applied, I can only access the UI via the 10.10.10.20 address. Not sure why.
 
Traceroute to the NAS yields similar results: vmbr0 can reach it, but the others can't.


Code:
root@pve01:~# traceroute 10.10.50.50 -i vmbr0
traceroute to 10.10.50.50 (10.10.50.50), 30 hops max, 60 byte packets
 1  unifi-100.local.beachbox.casa (10.10.100.1)  0.268 ms  0.239 ms  0.234 ms
 2  10.10.50.50 (10.10.50.50)  1.446 ms * *
root@pve01:~# traceroute 10.10.50.50 -i vmbr1
traceroute to 10.10.50.50 (10.10.50.50), 30 hops max, 60 byte packets
 1  * * *
 2  * * *
 3  * * *
 4  * * *
 5  * * *
 6  * * *
 7  * * *
 8  * * *
 9  * * *
10  * * *
11  * * *
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *
root@pve01:~# traceroute 10.10.50.50 -i vmbr2
traceroute to 10.10.50.50 (10.10.50.50), 30 hops max, 60 byte packets
 1  * * *
 2  * * *
 3  * * *
 4  * * *
 5  * * *
 6  * * *
 7  * * *
 8  * * *
 9  * * *
10  * * *
11  * * *
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *
 
Why are you even trying to ping via specific interfaces? The OS has a routing table that decides which interface to send the packet through, based on the configured addresses and subnets and any manually configured static routes. So in general you can't reach every address through every interface, and that is fine.

Also keep in mind that in order to ping/connect to any address, the reverse route has to be valid too. If you use a command like ping 10.10.1.1 -I vmbr2, it sends an ICMP packet through vmbr2 with destination address 10.10.1.1 and source address 10.10.50.100 (because that is the IP of the interface vmbr2).
If the receiving machine then has no valid way to send the response back to that source address, 10.10.50.100, the ping command will fail even though the two machines can reach each other via normal means.

Even worse, most stateful firewalls demand that both directions of a TCP connection take the same path, so even if you do some weird routing shenanigans, you might end up in a state where you can ping things but "real" traffic won't work as expected.
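If you want to see which interface and source address the routing table would actually pick for a destination, without forcing anything, a check like this helps (the example answer is a sketch based on the addresses in this thread):

Code:
# Ask the kernel which route it would use for the NAS address.
ip route get 10.10.50.50
# With 10.10.50.100/24 configured on vmbr2, the expected answer is roughly:
#   10.10.50.50 dev vmbr2 src 10.10.50.100
# If the NAS only answers when sourced from vmbr0, and with TTL=63, the traffic
# is really going through the router, which points at an L2/VLAN issue on the
# vmbr2 side rather than at the node's routing table.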
 
The reason I am testing via ping and whatnot is that I am trying to mount an NFS share from my NAS at 10.10.50.50. I have an interface on that subnet and I still cannot access it.
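For context, the kind of checks and mount command involved look roughly like this; the export path and NFS version are placeholders, not my actual settings:

Code:
# See whether the NAS answers NFS-related RPCs from this node at all.
showmount -e 10.10.50.50          # list exports (needs nfs-common installed)
rpcinfo -p 10.10.50.50            # confirm the portmapper/NFS services respond
# Then try the mount by hand; /export/backup stands in for the real export.
mkdir -p /mnt/test
mount -t nfs -o vers=3 10.10.50.50:/export/backup /mnt/test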

I would also like to know why my nodes don't bring up the UI on the 10.10.100.x or 10.10.50.x interfaces, but they do on the 10.10.10.x one. It makes no sense to me at all...
 
You have VLAN and MTU settings on vmbr2; check those settings on your router/switch and NAS.
As mentioned above, there is no need to ping from a specific interface; the system chooses the interface itself.
And do not connect the NAS via the router; use a switch or a direct connection.
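A simple way to check whether jumbo frames actually survive the whole path is to send pings that are not allowed to be fragmented; the sizes below assume a 9000-byte MTU and the usual 28 bytes of IP/ICMP overhead:

Code:
# 8972 bytes of payload + 28 bytes of headers = a 9000-byte packet; -M do forbids fragmentation.
ping -M do -s 8972 10.10.50.50
# If that fails while a standard-size probe works, some device in the path
# is not actually passing jumbo frames:
ping -M do -s 1472 10.10.50.50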
 

I checked the switch and NAS; all were set to MTU 9000. To simplify things, I removed jumbo frames everywhere, but still no luck mounting the NAS at 10.10.50.50.
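If it keeps failing, I'll capture the actual error from the mount attempt, since that is usually more telling than ping; a verbose attempt plus a look at the kernel log would be roughly (export path again just a placeholder):

Code:
# Run the mount verbosely to see where it stalls or what error comes back.
mount -v -t nfs 10.10.50.50:/export/backup /mnt/test
# Check the kernel log for NFS/RPC timeouts right after the attempt.
dmesg | tail -n 20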
 
