Networking somewhat broken after upgrade (BFD + BGP)

After upgrading to 8.4.1, BFD and BGP stopped working. BFD constantly reports the sessions as DOWN even though the links are up, and disabling BFD works around the problem. One machine in each cluster is affected, and on those machines *all* connections are hit. I don't understand why it's broken. We upgraded from FRR 8.x to FRR 10.x.
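
For anyone wanting to compare, the session state can be checked roughly like this (a sketch; exact output differs between FRR 8 and FRR 10):

```
# check BFD session state and BGP session summaries via vtysh
vtysh -c "show bfd peers brief"
vtysh -c "show bgp summary"
```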

Configuration:
router bgp 4280323229
bgp router-id 10.1.1.3
no bgp ebgp-requires-policy
no bgp default ipv4-unicast
bgp deterministic-med
bgp bestpath as-path multipath-relax
bgp bestpath compare-routerid
timers bgp 3 9
neighbor haqua_default peer-group
neighbor haqua_default remote-as external
neighbor haqua_default bfd
neighbor haqua_default capability extended-nexthop
neighbor sf_fw peer-group
neighbor sf_fw remote-as external
neighbor sf_fw bfd
neighbor sf_fw capability extended-nexthop
neighbor sl_fw peer-group
neighbor sl_fw remote-as external
neighbor sl_fw bfd
neighbor sl_fw capability extended-nexthop
neighbor underlay peer-group
neighbor underlay remote-as external
neighbor underlay capability extended-nexthop
neighbor vlan345 interface peer-group haqua_default
neighbor 2a13:2142:1:9::f1 peer-group sf_fw
neighbor 2a13:2142:1:9::f2 peer-group sf_fw
neighbor 2a13:2142:1:9::f3 peer-group sf_fw
neighbor vlan108 interface peer-group sl_fw
neighbor ens1f0np0 interface peer-group underlay
neighbor ens1f1np1 interface peer-group underlay
neighbor ens1f3np3 interface peer-group underlay
!
address-family ipv4 unicast
redistribute connected route-map loopback
neighbor sf_fw activate
neighbor sf_fw prefix-list default_only in
neighbor sf_fw prefix-list lo out
neighbor sl_fw activate
neighbor sl_fw prefix-list to_sl_fw_adv in
neighbor sl_fw prefix-list default_adv out
neighbor underlay activate
exit-address-family
!
address-family ipv6 unicast
redistribute connected
neighbor haqua_default activate
neighbor haqua_default prefix-list avernus_adv in
neighbor haqua_default prefix-list default_adv out
neighbor sf_fw activate
neighbor sf_fw prefix-list to_sf_fw_adv in
neighbor sf_fw prefix-list default_adv out
neighbor sl_fw activate
neighbor sl_fw prefix-list to_sl_fw_adv in
neighbor sl_fw prefix-list default_adv out
neighbor underlay activate
exit-address-family
!
address-family l2vpn evpn
neighbor underlay activate
advertise-all-vni
vni 261
rd 1103:261
exit-vni
advertise-svi-ip
exit-address-family
exit
!

The underlay peers are what's affected.

A short excerpt from the logs:

```
2025-04-14T19:09:48.882141+02:00 shiki3 watchfrr[23394]: [QDG3Y-BY5TN] zebra state -> up : connect succeeded
2025-04-14T19:09:48.882194+02:00 shiki3 watchfrr[23394]: [QDG3Y-BY5TN] mgmtd state -> up : connect succeeded
2025-04-14T19:09:48.882242+02:00 shiki3 watchfrr[23394]: [QDG3Y-BY5TN] bgpd state -> up : connect succeeded
2025-04-14T19:09:48.882269+02:00 shiki3 watchfrr[23394]: [QDG3Y-BY5TN] staticd state -> up : connect succeeded
2025-04-14T19:09:48.882292+02:00 shiki3 watchfrr[23394]: [QDG3Y-BY5TN] bfdd state -> up : connect succeeded
2025-04-14T19:09:48.882318+02:00 shiki3 watchfrr[23394]: [KWE5Q-QNGFC] all daemons up, doing startup-complete notify
2025-04-14T19:09:49.606852+02:00 shiki3 zebra[23406]: [V98V0-MTWPF] client 51 says hello and bids fair to announce only bgp routes vrf=0
2025-04-14T19:09:52.994648+02:00 shiki3 bgpd[23413]: [TXY0T-CYY6F][EC 100663299] Can't get remote address and port: Transport endpoint is not connected
2025-04-14T19:09:52.994817+02:00 shiki3 bgpd[23413]: [H4B4J-DCW2R][EC 33554455] ens1f3np3 [Error] bgp_read_packet error: Connection reset by peer
2025-04-14T19:09:55.183662+02:00 shiki3 bgpd[23413]: [TXY0T-CYY6F][EC 100663299] Can't get remote address and port: Transport endpoint is not connected
2025-04-14T19:09:56.221385+02:00 shiki3 bgpd[23413]: [TXY0T-CYY6F][EC 100663299] Can't get remote address and port: Transport endpoint is not connected
2025-04-14T19:10:09.451370+02:00 shiki3 zebra[23406]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
2025-04-14T19:10:09.455245+02:00 shiki3 watchfrr[23394]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
2025-04-14T19:10:09.456423+02:00 shiki3 bfdd[23423]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
2025-04-14T19:10:09.505545+02:00 shiki3 mgmtd[23411]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
2025-04-14T19:10:09.778039+02:00 shiki3 bgpd[23413]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
2025-04-14T19:10:09.852405+02:00 shiki3 zebra[23406]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
2025-04-14T19:10:09.856607+02:00 shiki3 watchfrr[23394]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
2025-04-14T19:10:09.857741+02:00 shiki3 bfdd[23423]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
2025-04-14T19:10:09.904466+02:00 shiki3 mgmtd[23411]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
2025-04-14T19:10:10.021029+02:00 shiki3 bgpd[23413]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
2025-04-14T19:10:10.081139+02:00 shiki3 watchfrr[23394]: [WFP93-1D146] configuration write completed with exit code 0
2025-04-14T19:10:11.579500+02:00 shiki3 watchfrr[23394]: [WFP93-1D146] configuration write completed with exit code 0
2025-04-14T19:10:14.024161+02:00 shiki3 bgpd[23413]: [TXY0T-CYY6F][EC 100663299] Can't get remote address and port: Transport endpoint is not connected
2025-04-14T19:10:14.024278+02:00 shiki3 bgpd[23413]: [TXY0T-CYY6F][EC 100663299] Can't get remote address and port: Transport endpoint is not connected
2025-04-14T19:10:14.024309+02:00 shiki3 bgpd[23413]: [TXY0T-CYY6F][EC 100663299] Can't get remote address and port: Transport endpoint is not connected
2025-04-14T19:10:14.024335+02:00 shiki3 bgpd[23413]: [TXY0T-CYY6F][EC 100663299] Can't get remote address and port: Transport endpoint is not connected
2025-04-14T19:10:14.024360+02:00 shiki3 bgpd[23413]: [TXY0T-CYY6F][EC 100663299] Can't get remote address and port: Transport endpoint is not connected
2025-04-14T19:10:14.024385+02:00 shiki3 bgpd[23413]: [TXY0T-CYY6F][EC 100663299] Can't get remote address and port: Transport endpoint is not connected
2025-04-14T19:10:15.064907+02:00 shiki3 watchfrr[23394]: [NG1AJ-FP2TQ] Terminating on signal
2025-04-14T19:10:15.174976+02:00 shiki3 zebra[23406]: [N5M5Y-J5BPG][EC 4043309121] Client 'bfd' (session id 0) encountered an error and is shutting down.
2025-04-14T19:10:15.175152+02:00 shiki3 bgpd[23413]: [ZW1GY-R46JE] Terminating on signal
2025-04-14T19:10:15.175456+02:00 shiki3 mgmtd[23411]: [X3G8F-PM93W] BE-adapter: mgmt_msg_read: got EOF/disconnect
2025-04-14T19:10:15.175500+02:00 shiki3 zebra[23406]: [JPSA8-5KYEA] client 44 disconnected 0 bfd routes removed from the rib
2025-04-14T19:10:15.175525+02:00 shiki3 zebra[23406]: [S929C-NZR3N] client 44 disconnected 0 bfd nhgs removed from the rib
2025-04-14T19:10:15.175549+02:00 shiki3 mgmtd[23411]: [J2RAS-MZ95C] Terminating on signal
2025-04-14T19:10:15.175604+02:00 shiki3 zebra[23406]: [N5M5Y-J5BPG][EC 4043309121] Client 'static' (session id 0) encountered an error and is shutting down.
2025-04-14T19:10:15.175644+02:00 shiki3 zebra[23406]: [X3G8F-PM93W] BE-client: mgmt_msg_read: got EOF/disconnect
2025-04-14T19:10:15.175674+02:00 shiki3 zebra[23406]: [XVBTQ-5QTVQ] Terminating on signal
2025-04-14T19:10:15.176151+02:00 shiki3 zebra[23406]: [JPSA8-5KYEA] client 18 disconnected 58 bgp routes removed from the rib
2025-04-14T19:10:15.176204+02:00 shiki3 zebra[23406]: [S929C-NZR3N] client 18 disconnected 0 bgp nhgs removed from the rib
2025-04-14T19:10:15.176267+02:00 shiki3 bgpd[23413]: [YAF85-253AP][EC 100663299] buffer_write: write error on fd 15: Broken pipe
2025-04-14T19:10:15.176298+02:00 shiki3 bgpd[23413]: [X6B3Y-6W42R][EC 100663302] zclient_send_message: buffer_write failed to zclient fd 15, closing
2025-04-14T19:10:15.176335+02:00 shiki3 zebra[23406]: [JPSA8-5KYEA] client 32 disconnected 0 vnc routes removed from the rib
2025-04-14T19:10:15.176365+02:00 shiki3 zebra[23406]: [S929C-NZR3N] client 32 disconnected 0 vnc nhgs removed from the rib
2025-04-14T19:10:15.176411+02:00 shiki3 zebra[23406]: [JPSA8-5KYEA] client 39 disconnected 0 static routes removed fro
```
 
Seems like there are issues with BFD being flaky in 10.2.1; I was able to reproduce this on my test cluster as well. What worked for me was resetting BFD as follows on the nodes where the errors occurred:

Code:
$ vtysh
(vtysh) conf t
(vtysh) router bgp <asn>

# for each neighbor using bfd:
(vtysh) no neighbor <name> bfd
(vtysh) neighbor <name> bfd
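
The same toggle can also be done non-interactively with vtysh -c, which is handy with many neighbors; a rough sketch, with the ASN and peer-group name as placeholders:

```
# re-arm BFD for one peer group without entering the interactive shell
vtysh -c 'configure terminal' \
      -c 'router bgp <asn>' \
      -c 'no neighbor <name> bfd' \
      -c 'neighbor <name> bfd'
```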

This seems to be fixed in 10.2.2; we'll see if we can get this version onto testing soon.
 
Hello, did the 10.2.2 version resolve the issue? I'm also having an FRR full-mesh connection issue on 8.4.1.
 
Hello,
Routed Setup (with Fallback) according to https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server.
I found the reason for the issue: the upgrade to FRR 10.2.2 overwrites /etc/frr/daemons, changing fabricd=yes back to fabricd=no.
After reverting /etc/frr/daemons to fabricd=yes on all nodes, the issue was resolved.
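
For reference, the relevant toggles in /etc/frr/daemons look roughly like this (an excerpt; the shipped file contains more daemons and option lines):

```
# /etc/frr/daemons (excerpt)
bgpd=yes
bfdd=yes
fabricd=yes   # needed here because the full-mesh setup from the wiki uses OpenFabric
```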

Makes sense, since we ship a custom version of that file. When updating FRR, apt should ask whether you want to keep your custom configuration or overwrite it with the package's version. In that case, make sure you select the option to keep your custom file. That should avoid the problem in the future.
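
If the upgrade is ever run non-interactively, one way to make dpkg keep locally modified conffiles is the --force-confold option; a sketch (reviewing the prompt interactively is still preferable):

```
# keep the currently installed versions of changed conffiles such as /etc/frr/daemons
apt-get dist-upgrade -o Dpkg::Options::="--force-confold"
```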
 
I ran into this same issue and was also able to fix it by changing /etc/frr/daemons to fabricd=yes.
But it took quite a bit of time to identify the root cause.

I am using pveupdate / pveupgrade to install updates.
I checked the logs, and the daemons file was modified by pveupgrade.
I don't remember this for sure, but I assume pveupgrade automatically replaced the file without asking.

Suggestion: when a user triggers the SDN apply process, Proxmox should also check whether the /etc/frr/daemons file is compatible with the configured settings (for example fabricd=yes for BGP setups) and update the daemons file as required.
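
As a stopgap, a quick check like this can be run after upgrades to catch the regression early (just an illustration of the suggestion, not something Proxmox ships):

```
# show which FRR daemons are currently enabled on this node
grep -E '^(bgpd|bfdd|fabricd)=' /etc/frr/daemons
```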


root@pve1:~# zgrep frr /var/log/apt/term.log*
/var/log/apt/term.log.1.gz:Preparing to unpack .../frr-pythontools_10.2.1-1+pve2_all.deb ...
/var/log/apt/term.log.1.gz:Unpacking frr-pythontools (10.2.1-1+pve2) over (8.5.2-1+pve1) ...
/var/log/apt/term.log.1.gz:Preparing to unpack .../frr_10.2.1-1+pve2_amd64.deb ...
/var/log/apt/term.log.1.gz:Unpacking frr (10.2.1-1+pve2) over (8.5.2-1+pve1) ...
/var/log/apt/term.log.1.gz:Setting up frr (10.2.1-1+pve2) ...
/var/log/apt/term.log.1.gz:Installing new version of config file /etc/frr/daemons ...

/var/log/apt/term.log.1.gz:Installing new version of config file /etc/frr/support_bundle_commands.conf ...
/var/log/apt/term.log.1.gz:Installing new version of config file /etc/iproute2/rt_protos.d/frr.conf ...
/var/log/apt/term.log.1.gz:Installing new version of config file /etc/logrotate.d/frr ...
/var/log/apt/term.log.1.gz:Installing new version of config file /etc/pam.d/frr ...
/var/log/apt/term.log.1.gz:Installing new version of config file /etc/rsyslog.d/45-frr.conf ...
/var/log/apt/term.log.1.gz:addgroup: The group `frrvty' already exists as a system group. Exiting.
/var/log/apt/term.log.1.gz:addgroup: The group `frr' already exists as a system group. Exiting.
/var/log/apt/term.log.1.gz:The system user `frr' already exists. Exiting.
/var/log/apt/term.log.1.gz:Setting up frr-pythontools (10.2.1-1+pve2) ...
/var/log/apt/term.log.1.gz:Preparing to unpack .../0-frr-pythontools_10.2.2-1+pve1_all.deb ...
/var/log/apt/term.log.1.gz:Unpacking frr-pythontools (10.2.2-1+pve1) over (10.2.1-1+pve2) ...
/var/log/apt/term.log.1.gz:Preparing to unpack .../1-frr_10.2.2-1+pve1_amd64.deb ...
/var/log/apt/term.log.1.gz:Unpacking frr (10.2.2-1+pve1) over (10.2.1-1+pve2) ...
/var/log/apt/term.log.1.gz:Setting up frr (10.2.2-1+pve1) ...
/var/log/apt/term.log.1.gz:addgroup: The group `frrvty' already exists as a system group. Exiting.
/var/log/apt/term.log.1.gz:addgroup: The group `frr' already exists as a system group. Exiting.
/var/log/apt/term.log.1.gz:The system user `frr' already exists. Exiting.
/var/log/apt/term.log.1.gz:Setting up frr-pythontools (10.2.2-1+pve1) ...
 
Did you run the upgrade with -y by chance? Usually apt should ask, when there are changes to configuration files, which version you want to keep. I'll check whether there are any issues with the frr package on upgrade.
 
Hi

I am not running apt directly; I usually just run pveupdate / pveupgrade and enter Y to install the new packages.

I just checked the following
  1. Installed PVE 8.3 from ISO image
  2. pveupdate
  3. Installed frr-8.5.1
  4. /etc/frr/daemons has fabricd=no
  5. pveupgrade to PVE 8.4 with all updates
  6. I haven't added -y myself, not sure if pveupgrade does that?
  7. grep frr /var/log/apt/term.log shows Installing new version of config file /etc/frr/daemons
Is it correct that frr-8.5.1 has fabricd=no?

Is the above expected / have I misunderstood how pveupgrade works?
Or would it make sense to add some kind of check before it replaces the daemons file / config files in general?
 
Is it correct that frr-8.5.1 has fabricd=no?

Yes, only bfdd and bgpd are set to yes in the file we ship. I'll check this more thoroughly in the coming week, try to reproduce it, and see whether we can improve the behavior if it really doesn't prompt.
 
Gave this a quick test and upgraded from my 8.3 cluster running frr 8.5 to frr 10.2.2 via pveupgrade (which is what the UI uses for updating as well). It prompted me because of my modified daemons file, so this seems to be working properly for me. Could you check your term log without grepping to see if this really didn't occur?

Code:
Setting up frr (10.2.2-1+pve1) ...

Configuration file '/etc/frr/daemons'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** daemons (Y/I/N/O/D/Z) [default=N] ?

I haven't added -y myself, not sure if pveupgrade does that?
No, it doesn't - it's a very thin wrapper around dist-upgrade, see [1]


[1] https://git.proxmox.com/?p=pve-mana...3f7ac3f95db077fc334920251d7879ae7bcec;hb=HEAD
 
Hi,

root@pve2:~# ll /var/log/apt/
total 110
-rw-r--r-- 1 root root 26840 May 30 08:54 eipp.log.xz
-rw-r--r-- 1 root root 0 Jun 1 00:00 history.log
-rw-r--r-- 1 root root 896 May 30 08:56 history.log.1.gz
-rw-r--r-- 1 root root 1282 Apr 30 08:07 history.log.2.gz
-rw-r--r-- 1 root root 957 Mar 28 09:48 history.log.3.gz
-rw-r--r-- 1 root root 734 Feb 28 10:17 history.log.4.gz
-rw-r--r-- 1 root root 1107 Jan 30 09:21 history.log.5.gz
-rw-r--r-- 1 root root 486 Dec 21 14:05 history.log.6.gz
-rw-r--r-- 1 root root 1411 Nov 28 2024 history.log.7.gz
-rw-r--r-- 1 root root 1788 Oct 25 2024 history.log.8.gz
-rw-r----- 1 root adm 0 Jun 1 00:00 term.log
-rw-r----- 1 root adm 2729 May 30 08:56 term.log.1.gz
-rw-r----- 1 root adm 3580 Apr 30 08:07 term.log.2.gz
-rw-r----- 1 root adm 3019 Mar 28 09:48 term.log.3.gz
-rw-r----- 1 root adm 2102 Feb 28 10:17 term.log.4.gz
-rw-r----- 1 root adm 3453 Jan 30 09:21 term.log.5.gz
-rw-r----- 1 root adm 1465 Dec 21 14:05 term.log.6.gz
-rw-r----- 1 root adm 4135 Nov 28 2024 term.log.7.gz
-rw-r----- 1 root adm 5027 Oct 25 2024 term.log.8.gz

I ran
zgrep frr /var/log/apt/term.log*
and can see that there are only entries in /var/log/apt/term.log.2.gz and in /var/log/apt/term.log.8.gz


Looking at the file with the logs from April 2025.
The full log is attached as a file.

Log started: 2025-04-01 22:09:53
(Reading database ... 61984 files and directories currently installed.)
Preparing to unpack .../frr-pythontools_10.2.1-1+pve2_all.deb ...
Unpacking frr-pythontools (10.2.1-1+pve2) over (8.5.2-1+pve1) ...
Preparing to unpack .../frr_10.2.1-1+pve2_amd64.deb ...
Unpacking frr (10.2.1-1+pve2) over (8.5.2-1+pve1) ...
Setting up frr (10.2.1-1+pve2) ...
Installing new version of config file /etc/frr/daemons ...
Installing new version of config file /etc/frr/support_bundle_commands.conf ...
Installing new version of config file /etc/iproute2/rt_protos.d/frr.conf ...
Installing new version of config file /etc/logrotate.d/frr ...
Installing new version of config file /etc/pam.d/frr ...
Installing new version of config file /etc/rsyslog.d/45-frr.conf ...
addgroup: The group `frrvty' already exists as a system group. Exiting.
addgroup: The group `frr' already exists as a system group. Exiting.
adduser: Warning: The home dir /nonexistent you specified can't be accessed: No such file or directory
The system user `frr' already exists. Exiting.
Setting up frr-pythontools (10.2.1-1+pve2) ...
 


I just checked the following
  1. Installed PVE 8.3 from ISO image
  2. pveupdate
  3. Installed frr-8.5.1
  4. /etc/frr/daemons has fabricd=no
  5. pveupgrade to PVE 8.4 with all updates
  6. I haven't added -y myself, not sure if pveupgrade does that?
  7. grep frr /var/log/apt/term.log shows Installing new version of config file /etc/frr/daemons
Is it correct that frr-8.5.1 has fabricd=no?

It seems like you never set fabricd=yes during the process? Then it would make sense that it never prompted you, since the file was unchanged. On my cluster I couldn't reproduce this behavior: I got prompted by apt when I enabled fabricd and then upgraded from 8.5 to 10.2.

Suggestion: when a user triggers the SDN apply process, Proxmox should also check whether the /etc/frr/daemons file is compatible with the configured settings (for example fabricd=yes for BGP setups) and update the daemons file as required.

Maybe that is where the confusion comes from? SDN apply doesn't change the /etc/frr/daemons file (for now), and it won't parse any custom configuration you add to /etc/frr/frr.conf.local. If you add a custom FRR config, then you also need to make sure the respective daemons are enabled, same as you would with plain FRR.
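
For example, after enabling an additional daemon in /etc/frr/daemons for a custom config, FRR needs to be restarted so the daemon is actually started (a sketch):

```
# enable the daemon in /etc/frr/daemons first, then restart FRR
systemctl restart frr.service
```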
 
Hi,

Yes, maybe one or both of us got confused. Let me go back a couple of steps:

  1. I have configured EVPN via the Proxmox SDN GUI on frr 8.x last year (see below for config in case it helps)
  2. After upgrading to frr 10.x I noticed that BGP no longer works after a reboot
  3. I am able to manually fix it by switching off BFD, waiting until the BGP connection is up, then switching on BFD again
  4. I am looking for a way to fix this properly
  5. I have not made any manual changes to frr.conf
  6. My understanding from the above was that for BGP / BFD / EVPN to work I need fabricd=yes in /etc/frr/daemons
  7. I manually set fabricd=yes (and suspected we had found a bug, since I had to switch it on manually even though I only configured things via the Proxmox SDN GUI)

Maybe to ask another way: do I even need fabricd=yes for the EVPN config below to work?


root@pve1:~# cat /etc/pve/sdn/*.cfg
evpn: evpnctl
asn 65000
peers [...]

subnet: dhcp1-10.0.8.0-22
vnet dhcp1
dhcp-range start-address=10.0.8.10,end-address=10.0.11.255
gateway 10.0.8.1
snat 1

subnet: evpn1-10.0.2.0-24
vnet isolatd1
gateway 10.0.2.1
snat 1

subnet: evpn1-10.0.1.0-24
vnet public1
gateway 10.0.1.1
snat 1

vnet: dhcp1
zone dhcp1

vnet: public1
zone evpn1
tag 10100

vnet: isolatd1
zone evpn1
tag 10101

evpn: evpn1
controller evpnctl
vrf-vxlan 10000
exitnodes pve1,pve2,pve3
exitnodes-primary pve3
ipam pve
mac [...]

simple: dhcp1
dhcp dnsmasq
ipam pve
 
I am looking for a way to fix this properly
Does this still occur with the newest frr-10.2.2-1+pve1 package? Judging from your logs you are running 10.2.1-1+pve2. 10.2.2 contains the proper fix for this issue.

Maybe to ask another way : Do I even need fabricd=yes for the below EVPN config to work?
No, fabricd runs the daemon that manages OpenFabric connections. EVPN only uses bgpd (for BGP) and bfdd (for BFD); there is no OpenFabric involved. The user above was running into a different issue, which is why enabling fabricd worked for them.
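
In other words, for a pure SDN/EVPN node the file we ship is already sufficient; roughly (an excerpt, for illustration):

```
# /etc/frr/daemons as shipped (excerpt)
bgpd=yes
bfdd=yes
fabricd=no   # only needed when running OpenFabric, not for EVPN
```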
 
Hi

I did install the latest 10.2.2 update a couple of weeks after 10.2.1.
I also saw that it was supposed to fix this issue, but somehow that did not work for me.

Setting up frr (10.2.2-1+pve1) ...
addgroup: The group `frrvty' already exists as a system group. Exiting.
addgroup: The group `frr' already exists as a system group. Exiting.
adduser: Warning: The home dir /nonexistent you specified can't be accessed: No such file or directory
The system user `frr' already exists. Exiting.
Setting up frr-pythontools (10.2.2-1+pve1) ...
Processing triggers for man-db (2.11.2-2) ...
Log ended: 2025-04-24 00:10:53

Understood regarding fabricd; then that was not the reason this isn't working.


Not sure if that means anything, but my /etc/frr/frr.conf still starts with "frr version 8.5.2".
It was updated when I last applied the SDN config from the Proxmox GUI.
 
I also still have this problem, though not as often between the Proxmox nodes themselves. Specifically, I hit it pretty much any time my (local) PCT container tries to peer with the Proxmox host.

Basically my situation now is:

Proxmox node VLAN 145: a dedicated VLAN used only for PCT container 145.
They connect using BGP unnumbered (vlan145 on vmbr0 on the Proxmox node, eth1 on the container itself).
Proxmox FRR version is frr/stable,now 10.2.2-1+pve1
Container FRR version is 8.4.4 (it's a standard Debian Bookworm PCT machine)
If BFD is active on both machines, the session doesn't come up until I disable BFD on the container and then re-enable it. It seems to affect all of my machines now.
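
For context, the interface-based (unnumbered) peering on the container side looks roughly like this (a sketch; the ASN is a placeholder, eth1 as described above):

```
router bgp 65001
 neighbor eth1 interface remote-as external
 neighbor eth1 bfd
 !
 address-family ipv6 unicast
  neighbor eth1 activate
 exit-address-family
```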
 