[SOLVED] VE 7.1-10 slow to forward ARP replies over bridge

bmernz

Member
Mar 11, 2022
Hi,

After starting a container, a ping from inside the container to any address beyond the host can take up to a few minutes before replies start flowing.

Code:
root@deb-11-container:~# ping 172.20.1.1
PING 172.20.1.1 (172.20.1.1) 56(84) bytes of data.
From 172.20.1.181 icmp_seq=1 Destination Host Unreachable
From 172.20.1.181 icmp_seq=2 Destination Host Unreachable
From 172.20.1.181 icmp_seq=3 Destination Host Unreachable
.......
From 172.20.1.181 icmp_seq=139 Destination Host Unreachable
From 172.20.1.181 icmp_seq=140 Destination Host Unreachable
From 172.20.1.181 icmp_seq=141 Destination Host Unreachable
64 bytes from 172.20.1.1: icmp_seq=142 ttl=64 time=2049 ms
64 bytes from 172.20.1.1: icmp_seq=143 ttl=64 time=1025 ms
64 bytes from 172.20.1.1: icmp_seq=144 ttl=64 time=1.05 ms
........
64 bytes from 172.20.1.1: icmp_seq=167 ttl=64 time=0.299 ms
64 bytes from 172.20.1.1: icmp_seq=168 ttl=64 time=0.298 ms
64 bytes from 172.20.1.1: icmp_seq=169 ttl=64 time=0.417 ms
^C
--- 172.20.1.1 ping statistics ---
169 packets transmitted, 28 received, +141 errors, 83.432% packet loss, time 171974ms
rtt min/avg/max/mdev = 0.298/110.251/2048.924/418.693 ms, pipe 4
root@deb-11-container:~#

The network address is 172.20.1.0/24, with a gateway of 172.20.1.1

The host address is 172.20.1.191

The container address is 172.20.1.181

The host and container are connected via vmbr0.

The host can immediately ping all addresses.

I have run packet captures against both vmbr0 and veth100i0 and can see that the ARP replies are only forwarded after a delay.
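For reference, captures like the ones below can be taken on the host with tcpdump (interface names match this setup; the filter just narrows the output to ARP and ICMP):

Code:
# on the bridge, printing link-level (MAC) headers, no name resolution
tcpdump -eni vmbr0 'arp or icmp'
# the same on the container's veth end, in a second shell
tcpdump -eni veth100i0 'arp or icmp'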

Attached are the relevant packet captures showing the replies are present on vmbr0, but not forwarded to veth100i0.

vmbr0:
Code:
No.    Time    Source    Destination    Protocol    Length    Info
317    80.873525    c6:bc:ef:ae:e6:fc    Broadcast    ARP    42    Who has 172.20.1.1? Tell 172.20.1.181
318    80.873956    c6:bc:ef:ae:e6:fc    Broadcast    ARP    60    Who has 172.20.1.1? Tell 172.20.1.181
319    80.873990    VMware_8d:e3:1b    c6:bc:ef:ae:e6:fc    ARP    60    172.20.1.1 is at 00:00:5e:00:01:03
320    80.874202    VMware_8d:e3:1b    c6:bc:ef:ae:e6:fc    ARP    60    172.20.1.1 is at 00:00:5e:00:01:03
321    81.897527    c6:bc:ef:ae:e6:fc    Broadcast    ARP    42    Who has 172.20.1.1? Tell 172.20.1.181
322    81.898458    VMware_8d:e3:1b    c6:bc:ef:ae:e6:fc    ARP    60    172.20.1.1 is at 00:00:5e:00:01:03
323    81.898458    c6:bc:ef:ae:e6:fc    Broadcast    ARP    60    Who has 172.20.1.1? Tell 172.20.1.181
324    81.898458    VMware_8d:e3:1b    c6:bc:ef:ae:e6:fc    ARP    60    172.20.1.1 is at 00:00:5e:00:01:03
325    81.898486    172.20.1.181    172.20.1.1    ICMP    98    Echo (ping) request  id=0xc41d, seq=79/20224, ttl=64 (reply in 328)
326    81.898486    172.20.1.181    172.20.1.1    ICMP    98    Echo (ping) request  id=0xc41d, seq=80/20480, ttl=64 (reply in 329)
327    81.898487    172.20.1.181    172.20.1.1    ICMP    98    Echo (ping) request  id=0xc41d, seq=81/20736, ttl=64 (reply in 330)
328    81.898788    172.20.1.1    172.20.1.181    ICMP    98    Echo (ping) reply    id=0xc41d, seq=79/20224, ttl=64 (request in 325)
329    81.898844    172.20.1.1    172.20.1.181    ICMP    98    Echo (ping) reply    id=0xc41d, seq=80/20480, ttl=64 (request in 326)
330    81.898844    172.20.1.1    172.20.1.181    ICMP    98    Echo (ping) reply    id=0xc41d, seq=81/20736, ttl=64 (request in 327)

veth100i0:
Code:
No.    Time    Source    Destination    Protocol    Length    Info
158    79.850065    c6:bc:ef:ae:e6:fc    Broadcast    ARP    60    Who has 172.20.1.1? Tell 172.20.1.181
159    80.873550    c6:bc:ef:ae:e6:fc    Broadcast    ARP    42    Who has 172.20.1.1? Tell 172.20.1.181
160    80.873994    c6:bc:ef:ae:e6:fc    Broadcast    ARP    60    Who has 172.20.1.1? Tell 172.20.1.181
161    81.897552    c6:bc:ef:ae:e6:fc    Broadcast    ARP    42    Who has 172.20.1.1? Tell 172.20.1.181
162    81.898489    VMware_8d:e3:1b    c6:bc:ef:ae:e6:fc    ARP    60    172.20.1.1 is at 00:00:5e:00:01:03
163    81.898498    c6:bc:ef:ae:e6:fc    Broadcast    ARP    60    Who has 172.20.1.1? Tell 172.20.1.181
164    81.898511    172.20.1.181    172.20.1.1    ICMP    98    Echo (ping) request  id=0xc41d, seq=79/20224, ttl=64 (reply in 167)
165    81.898511    172.20.1.181    172.20.1.1    ICMP    98    Echo (ping) request  id=0xc41d, seq=80/20480, ttl=64 (reply in 168)
166    81.898512    172.20.1.181    172.20.1.1    ICMP    98    Echo (ping) request  id=0xc41d, seq=81/20736, ttl=64 (reply in 169)
167    81.898820    172.20.1.1    172.20.1.181    ICMP    98    Echo (ping) reply    id=0xc41d, seq=79/20224, ttl=64 (request in 164)
168    81.898874    172.20.1.1    172.20.1.181    ICMP    98    Echo (ping) reply    id=0xc41d, seq=80/20480, ttl=64 (request in 165)

I have tested using Debian 10 and Debian 11 container templates with the same result.

Am I doing something wrong?
 

Attachments

  • pcaps.zip
    5 KB
Did you check the `/etc/resolv.conf` inside the container?
Does the issue only occur with containers?
 
Hi Moayad,

Thanks for taking a look.

I chose 'use host settings' for DNS when creating the container, so the resolver matches the host.

I checked the container's settings below:

Code:
root@deb-11-container:~# cat /etc/resolv.conf
# --- BEGIN PVE ---
search dr.<***obfuscated***>
nameserver 172.20.1.1
# --- END PVE ---
root@deb-11-container:~#

I'm not quite sure how this relates, though, as I am only testing ping with IP addresses at this point.

I have only tested this with containers based on the Debian 10 and 11 images so far; I will try a VM and post my findings shortly.
 
OK, so I have the same issue from the Debian 11 netinstall ISO in a VM.

I switched to virtual console 2 during the install and could ping the host, but nothing beyond vmbr0 until a few minutes' worth of ARP requests went unanswered; once the first reply arrived, it came right:

(screenshot: ping output from the VM's virtual console)
 
I notice this issue only occurs after the container or VM is restarted.

I have had a container running but idle for several days now, and if I start a ping to an address that eventually worked earlier, it works immediately.

However, if I restart a container, the delay is back, and it recovers anywhere between 30 and 300 pings later.

It seems to be an issue with the bridge not forwarding the ARP replies immediately; however, the bridge_fd option defaults to 0, so STP forward delay should not be the cause.
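A quick way to confirm this on a running bridge is via the standard Linux bridge sysfs attributes (values are in centiseconds; a port state of 3 means forwarding):

Code:
# bridge-wide forward delay; 0 means ports skip the listening/learning states
cat /sys/class/net/vmbr0/bridge/forward_delay
# state of the container's port on the bridge
cat /sys/class/net/vmbr0/brif/veth100i0/state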

I have worked through this helpful article but have not identified anything out of order.

I also found this article, which describes a similar issue. It does not say whether it recovers over time; however, the ping may not have been left running long enough to know. I am not sure how Proxmox sets up the veth pairs and namespaces either, so I'd be grateful if anyone with experience here could clarify for me.
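For what it's worth, the generic pattern for wiring a container to a bridge looks like the sketch below; the interface names and the <container-pid> placeholder are illustrative, not necessarily what Proxmox does internally:

Code:
# create a veth pair (names are illustrative)
ip link add vethXYZ type veth peer name eth0-ct
# move one end into the container's network namespace
ip link set eth0-ct netns <container-pid>
# attach the host end to the bridge and bring it up
ip link set vethXYZ master vmbr0
ip link set vethXYZ up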
 
bmernz said: (original post quoted in full above)
Hi, so in this example, the ARP reply from your gateway is not reaching veth100i0?
Is the firewall checkbox enabled on the container NIC?
Can you send the result of: # brctl show ?
 
Thanks for taking a look!

Yes, the gateway (172.20.1.1) is off-host.
The host (172.20.1.191) is on the bridge vmbr0.
The container (172.20.1.181) is on veth100i0.

The packet capture shows the ARP reply arriving on vmbr0 but not being forwarded to veth100i0 for a variable amount of time.

The firewall is disabled on the container, host and datacenter.

Code:
root@pve-01:~# brctl show
bridge name     bridge id               STP enabled     interfaces
vmbr0           8000.0050568da31c       no              ens192
                                                        veth100i0
vmbr1           8000.0050568dcea6       no              ens224
vmbr2           8000.0050568db1a4       no              ens256

Code:
root@pve-01:~# brctl showmacs vmbr0
port no mac addr                is local?       ageing timer
  1     00:00:5e:00:01:03       no                 0.04
  1     00:0c:29:1a:7c:d4       no                 5.20
  1     00:50:56:53:82:62       no                 0.55
  1     00:50:56:56:c2:4c       no                65.31
  1     00:50:56:57:eb:7c       no                 8.60
  1     00:50:56:5e:64:73       no                 1.26
  1     00:50:56:8d:2c:dd       no                18.72
  1     00:50:56:8d:4e:b7       no                18.74
  1     00:50:56:8d:6e:37       no                47.79
  1     00:50:56:8d:a3:1c       yes                0.00
  1     00:50:56:8d:a3:1c       yes                0.00
  1     00:50:56:8d:c2:d5       no                 0.55
  1     00:50:56:8d:d2:58       no                 0.48
  1     00:50:56:8d:e3:1b       no                 0.00
  1     00:50:56:8d:f1:96       no                 0.97
  1     00:c0:ff:12:10:11       no                53.88
  1     38:ea:a7:36:9d:a4       no               100.02
  1     38:ea:a7:36:a0:84       no                59.01
  1     38:ea:a7:37:02:80       no                 0.89
  1     38:ea:a7:38:3d:fc       no                10.01
  1     ac:16:2d:a8:ae:3a       no                53.41
  1     e0:1a:ea:44:18:11       no                 0.29
  1     e0:1a:ea:53:a4:74       no                 0.47
  2     fe:46:bb:6d:34:ae       yes                0.00
  2     fe:46:bb:6d:34:ae       yes                0.00

Code:
root@pve-01:~# brctl showstp vmbr0
vmbr0
 bridge id              8000.0050568da31c
 designated root        8000.0050568da31c
 root port                 0                    path cost                  0
 max age                  20.00                 bridge max age            20.00
 hello time                2.00                 bridge hello time          2.00
 forward delay             0.00                 bridge forward delay       0.00
 ageing time             300.00
 hello timer               0.00                 tcn timer                  0.00
 topology change timer     0.00                 gc timer                   0.00
 flags


ens192 (1)
 port id                8001                    state                forwarding
 designated root        8000.0050568da31c       path cost                  2
 designated bridge      8000.0050568da31c       message age timer          0.00
 designated port        8001                    forward delay timer        0.00
 designated cost           0                    hold timer                 0.00
 flags

veth100i0 (2)
 port id                8002                    state                forwarding
 designated root        8000.0050568da31c       path cost                  2
 designated bridge      8000.0050568da31c       message age timer          0.00
 designated port        8002                    forward delay timer        0.00
 designated cost           0                    hold timer                 0.00
 flags

root@pve-01:~#
 
Hi,
I'm unable to reproduce on my side (Proxmox 7.1, kernel 5.15.19-2-pve, Debian 11 CT).

I don't see any ARP reply drops.

Could you try adding a static MAC entry to the bridge to see if it helps?

Code:
# bridge fdb append <mac> dev veth100i0 master static
 
Hi,

Thanks again for following up!

Here are the version numbers I could see in my install - in case they are relevant:

Code:
Linux 5.13.19-6-pve #1 SMP PVE 5.13.19-14
pve-manager/7.1-10/6ddebafe

I note you are using the optional 5.15 kernel.

I have just now tried the following without luck:

Shut down the container.
Check the container MAC: C6:BC:EF:AE:E6:FC
Check the MAC is not in the bridge table:
Code:
brctl showmacs vmbr0
port no mac addr                is local?       ageing timer
  1     00:00:5e:00:01:03       no                 0.20
...
  1     00:50:56:8d:a3:1c       yes                0.00
  1     00:50:56:8d:a3:1c       yes                0.00
....
  1     e0:1a:ea:53:a4:74       no                 0.50
Start the container and re-check the bridge:
Code:
brctl showmacs vmbr0
port no mac addr                is local?       ageing timer
  1     00:00:5e:00:01:03       no                 0.69
...
  1     00:50:56:8d:a3:1c       yes                0.00
  1     00:50:56:8d:a3:1c       yes                0.00
...
  1     c6:bc:ef:ae:e6:fc       no                 2.52
  1     e0:1a:ea:44:18:11       no                 0.38
  1     e0:1a:ea:53:a4:74       no                 0.20
  2     fe:26:88:ef:42:cd       yes                0.00
  2     fe:26:88:ef:42:cd       yes                0.00
The container MAC c6:bc:ef:ae:e6:fc is present (5th row from last), but importantly NOT on port 2!

This seems like a bug!?!

Try adding the static mapping:
Code:
root@pve-01:~# bridge fdb append c6:bc:ef:ae:e6:fc dev veth100i0 master static
root@pve-01:~# brctl showmacs vmbr0
port no mac addr                is local?       ageing timer
  1     00:00:5e:00:01:03       no                 0.36
...
  1     00:50:56:8d:a3:1c       yes                0.00
  1     00:50:56:8d:a3:1c       yes                0.00
...
  2     c6:bc:ef:ae:e6:fc       no                 0.00
  2     c6:bc:ef:ae:e6:fc       no                 0.00
  1     e0:1a:ea:44:18:11       no                 0.68
  1     e0:1a:ea:53:a4:74       no                 0.55
  2     fe:26:88:ef:42:cd       yes                0.00
  2     fe:26:88:ef:42:cd       yes                0.00

And now there are two entries for that MAC, and they are on the correct port!

The host I am pinging, 172.20.1.1, is at 00:00:5e:00:01:03, which is on the first row.

However, the container still cannot ping until after the delay, when the ARP reply is finally delivered...

Code:
From 172.20.1.181 icmp_seq=82 Destination Host Unreachable
From 172.20.1.181 icmp_seq=83 Destination Host Unreachable
From 172.20.1.181 icmp_seq=84 Destination Host Unreachable
64 bytes from 172.20.1.1: icmp_seq=85 ttl=64 time=2049 ms
64 bytes from 172.20.1.1: icmp_seq=86 ttl=64 time=1025 ms
64 bytes from 172.20.1.1: icmp_seq=87 ttl=64 time=0.675 ms
64 bytes from 172.20.1.1: icmp_seq=88 ttl=64 time=0.391 ms
64 bytes from 172.20.1.1: icmp_seq=89 ttl=64 time=0.441 ms
^C
--- 172.20.1.1 ping statistics ---
89 packets transmitted, 5 received, +84 errors, 94.382% packet loss, time 90119ms
rtt min/avg/max/mdev = 0.391/614.953/2048.615/819.253 ms, pipe 4

At which time the bridge table still looks like this:
Code:
root@pve-01:~# brctl showmacs vmbr0
port no mac addr                is local?       ageing timer
  1     00:00:5e:00:01:03       no                 0.34
...
  1     00:50:56:8d:a3:1c       yes                0.00
  1     00:50:56:8d:a3:1c       yes                0.00
...
  2     c6:bc:ef:ae:e6:fc       no                 0.00
  2     c6:bc:ef:ae:e6:fc       no                 0.00
  1     e0:1a:ea:44:18:11       no                 0.66
  1     e0:1a:ea:53:a4:74       no                 0.53
  2     fe:26:88:ef:42:cd       yes                0.00
  2     fe:26:88:ef:42:cd       yes                0.00

At this point I wondered whether, without the static assignment, the bridge would correct the port when the ping started working.

I kept an eye on the output of brctl showmacs vmbr0 and, sure enough, the port changed to 2 the instant the ping started working:
Code:
root@pve-01:~# brctl showmacs vmbr0
port no mac addr                is local?       ageing timer
  1     00:00:5e:00:01:03       no                 0.28
...
  1     00:50:56:8d:a3:1c       yes                0.00
  1     00:50:56:8d:a3:1c       yes                0.00
...
  1     c6:bc:ef:ae:e6:fc       no                 0.71
  1     e0:1a:ea:44:18:11       no                 0.99
  1     e0:1a:ea:53:a4:74       no                 0.18
  2     fe:2d:16:1b:c1:bf       yes                0.00
  2     fe:2d:16:1b:c1:bf       yes                0.00
root@pve-01:~# brctl showmacs vmbr0
port no mac addr                is local?       ageing timer
  1     00:00:5e:00:01:03       no                 0.58
...
  1     00:50:56:8d:a3:1c       yes                0.00
  1     00:50:56:8d:a3:1c       yes                0.00
...
  2     c6:bc:ef:ae:e6:fc       no                 0.89
  1     e0:1a:ea:44:18:11       no                 0.41
  1     e0:1a:ea:53:a4:74       no                 0.51
  2     fe:2d:16:1b:c1:bf       yes                0.00
  2     fe:2d:16:1b:c1:bf       yes                0.00
root@pve-01:~#
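For anyone repeating this, a change-highlighting watch loop makes the port flip easy to spot (grepping for the container MAC from above):

Code:
watch -d -n1 "brctl showmacs vmbr0 | grep -i c6:bc:ef:ae:e6:fc"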

This is really weird!

Do you think this is a bug in the Linux bridge? Or is it the way I have it configured? Or the way the interfaces are brought up when the containers/VMs start?
 
I really don't know; I have never seen this before. (I have tested the 5.10, 5.13, and 5.15 kernels to be sure; I can't reproduce it.)


Could you use `bridge fdb show` to see exactly the MAC <-> interface mapping (instead of port numbers)?
(Just to be sure about the port in that 5th-from-last row: "The container MAC c6:bc:ef:ae:e6:fc is present (5th row from last), but importantly NOT on port 2!")
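For example (bridge and MAC as in this thread; the grep just narrows the output):

Code:
bridge fdb show br vmbr0 | grep -i c6:bc:ef:ae:e6:fc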

What could happen, if the MAC is on the wrong port, is that the ARP reply is forwarded to that wrong port until the bridge ageing timeout expires the entry; after that the packet is flooded to all ports and the CT finally sees it after some time.
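The ageing timeout in question can be read from sysfs (a standard bridge attribute, in centiseconds; 30000 here would match the 300.00 s shown by brctl showstp earlier):

Code:
cat /sys/class/net/vmbr0/bridge/ageing_time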


When the container is shut down, the veth interface is destroyed, so the associated MAC entry on the bridge should be removed too (you shouldn't see any reference to this MAC in the bridge).

When the container starts, the MAC should appear on the bridge on the veth interface as soon as one packet is sent by the container (an ARP request, ARP reply, or any other packet sent by the CT).

I don't see why the MAC would be present on another port (unless you have a duplicate MAC on your network).
 
OK, so with your help and a couple of relevant posts I found elsewhere, I figured it out!

It isn't a duplicate MAC.

I realised that the bridge really is seeing the container's MAC on the external interface and learning it there, because the ARP broadcast is being reflected back to the host by the external switch.
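That reflection can be confirmed by capturing inbound frames on the uplink whose source MAC is the container's own address; on a correctly behaving switch nothing should match (interface and MAC as in this thread, assuming a tcpdump new enough to support the -Q direction flag):

Code:
# frames sourced from the container's MAC should never come back in from the uplink
tcpdump -Q in -eni ens192 ether src c6:bc:ef:ae:e6:fc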

I got a clue from this: https://bugs.launchpad.net/neutron/+bug/1738659

The host I'm testing this all on is a VMware 6.5 cluster, and this is caused by the way the Virtual Distributed Switch handles MACs in this version.

As I discovered here: vSphere-vNetwork-Discussions/Vswitch-sending-ARP-Packet-back-from-Source

So the solution was to install a "fling" that lets the VDS learn and manage MACs in a way that is more consistent with a real switch: esxi-learnswitch-enhancement-to-the-esxi-mac-learn-dvfilter

Apparently this is no longer required in vSphere 6.7 and above.

Once installed and configured, the Proxmox bridge works as expected - now that the upstream switch is behaving correctly!

Thank you for taking the time to look at this with me!

I hope this thread is of use to someone else at some stage.
 