Check Point/Fortinet firewall failover not working on internet interface

devzero

hi,

I am currently working on a project with two Proxmox hosts; on both of them there are VMs hosting Check Point and Fortinet firewalls.
Generally they are working fine. However, if you shut down the switch port or disconnect the network cable on the interface where the internet connectivity is configured,
neither firewall environment detects that a network disconnect has occurred.
On the Check Point VMs a DMZ network is configured as well; there the switchover did work when the cable of that interface was unplugged or the switch port was disabled.

The Proxmox version is 8.4, and the network interfaces are configured with Open vSwitch; the VLAN mode is trunk with a certain range.
Support tickets have already been opened with both firewall vendors, but they are still investigating.

Does anyone have a similar setup and can give a hint where to look for the issue?
thanks
 
hi devzero,

that might turn out to be a tricky question. Would it be possible to provide us with the content of your /etc/network/interfaces file?

From your information so far, your setup seems to involve at least three network interfaces from the perspective of the host:
- at least one physical NIC
- the vmbr on top of it
- and the VM's actual interface.
So it is possible that, if the physical NIC gets disconnected, the VM won't notice, because the VM's own interface is still up.
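You can usually see this directly on the host: the carrier state of the physical NIC drops, while the VM's tap device stays up. A minimal check (the physical interface name is a placeholder; Proxmox normally names the tap devices tap<vmid>i<n>, the one below is just an example):

Code:
# physical NIC whose cable was pulled / whose switch port was shut down
cat /sys/class/net/<physical-nic>/carrier     # 0 = no link

# tap device of the firewall VM, e.g. VM 100, net0
cat /sys/class/net/tap100i0/carrier           # typically still 1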

If your hardware and your desired setup support it, you could perhaps pass the physical ports through to your firewall VMs directly via PCIe.

Maybe it would also help to elaborate a little on your planned setup? Are you trying to achieve HA on the application (firewall) layer, or on the VM layer by migrating the VM?

BR, Lucas
 
hi bl1mp,

thanks for the reply, the network config is quite complex:

Code:
auto lo
iface lo inet loopback

auto ens15f2np2
iface ens15f2np2 inet manual

auto ens15f3np3
iface ens15f3np3 inet manual

auto ens1f1np1
iface ens1f1np1 inet manual

auto ens2f0np0
iface ens2f0np0 inet manual

auto ens2f1np1
iface ens2f1np1 inet manual

auto ens3f0np0
iface ens3f0np0 inet manual

auto ens3f1np1
iface ens3f1np1 inet manual

auto ens15f0np0
iface ens15f0np0 inet manual

auto ens15f1np1
iface ens15f1np1 inet manual

auto ens1f0np0
iface ens1f0np0 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves ens15f0np0 ens1f0np0
        bond-miimon 100
        bond-mode 802.3ad
        bond_downdelay 200
        bond_updelay 200
        bond-lacp-rate 1
#Management

auto bond4
iface bond4 inet manual
        bond-slaves ens15f1np1 ens1f1np1
        bond-miimon 100
        bond-mode 802.3ad
        bond_downdelay 200
        bond_updelay 200
        bond-lacp-rate 1
#Cluster

auto bond1
iface bond1 inet manual
        ovs_bonds ens2f1np1 ens3f1np1
        ovs_type OVSBond
        ovs_bridge vmbr1
        ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast
#INTERNET-SYNC

auto bond2
iface bond2 inet manual
        ovs_bonds ens2f0np0 ens15f2np2
        ovs_type OVSBond
        ovs_bridge vmbr2
        ovs_options lacp=active bond_mode=balance-tcp other_config:lacp-time=fast
#DMZ

auto bond3
iface bond3 inet manual
        ovs_bonds ens3f0np0 ens15f3np3
        ovs_type OVSBond
        ovs_bridge vmbr3
        ovs_options lacp=active bond_mode=balance-tcp
#internal

auto bond4.1021
iface bond4.1021 inet static
        address 172.18.2xx.xx/27

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        post-up /bin/echo 0 > /sys/class/net/vmbr0/bridge/vlan_filtering
#Management

auto vmbr0.1001
iface vmbr0.1001 inet static
        address 172.18.1xx.1xx/24
        gateway 172.18.1xx.1
        dns-nameservers 172.18.1xx.1x

auto vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports bond1
#INTERNET-SYNC

auto vmbr2
iface vmbr2 inet manual
        ovs_type OVSBridge
        ovs_ports bond2
#DMZ

auto vmbr3
iface vmbr3 inet manual
        ovs_type OVSBridge
        ovs_ports bond3
#internal

source /etc/network/interfaces.d/*

There is more than one VM per server, so I can't pass the hardware NICs through.
Is there an option to propagate the disconnect to the VMs when the bond goes down?
The HA should be handled by the firewall application via an internal detection process.

with kind regards,
devzero
 
Hi devzero,
the question is whether there is more than one VM on the uplink and whether you can split the Ethernet card in a scheme that works for you. Since your interfaces are enumerated like this:
Code:
auto ens2f0np0
iface ens2f0np0 inet manual

auto ens2f1np1
iface ens2f1np1 inet manual
that could probably work. To check it out, you can simply look at how the PCI devices are listed in the PCI passthrough part of the VM configuration. This is also a little bit driver and kernel module dependent, so you can easily go down a rabbit hole here.
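To get a first overview on the host, something like this shows how the ports are split into separate PCI functions (the PCI address in the second command is only an example, yours will differ):

Code:
# list the Ethernet controllers with their PCI addresses/functions
lspci -nn | grep -i ethernet

# a single function could then be handed to a VM as a raw PCI device, e.g.:
qm set <vmid> -hostpci0 0000:81:00.1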

---
Checking and testing the OVS configuration could be another path; I am not super familiar with that.
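As a starting point, you could at least check whether OVS itself notices a member link going down, e.g. for the internet bond from your config:

Code:
# overall OVS layout (bridges, bonds, ports)
ovs-vsctl show

# per-member link state and LACP status of the bond
ovs-appctl bond/show bond1
ovs-appctl lacp/show bond1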

---
What should also work is monitoring /sys/class/net/<interface-name>, probably via a systemd path unit (https://www.freedesktop.org/software/systemd/man/latest/systemd.path.html).
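Roughly such a pair of units, as a sketch only (the interface is one of your bond1 members, the script path is made up; note also that sysfs attributes do not always generate inotify events, so a timer-based poll may be the more reliable variant):

Code:
# /etc/systemd/system/uplink-carrier.path
[Unit]
Description=Watch the carrier file of one uplink NIC

[Path]
PathModified=/sys/class/net/ens2f1np1/carrier
Unit=uplink-carrier.service

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/uplink-carrier.service
[Unit]
Description=React to a carrier change on the uplink NIC

[Service]
Type=oneshot
ExecStart=/usr/local/bin/uplink-carrier-check.sh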

---
BR, Lucas
 
Hi bl1mp,
There are around 20 VMs on every server.
In addition, bonds are configured on the server, so the link only fails if both members of the same bond go down.

There is a feature named SR-IOV, but the question is whether that works for 20 VMs. The bond would also have to be set up separately in every VM, and it is questionable whether the connecting switch supports 20 bonds on the same switch port and, if it does, how reliably that works.
 
Ah ok, I think I am getting closer; maybe my assumption was too hasty.

So you are not planning to put the firewalls in front of the other VMs, in the sense that the firewalls provide the uplink service for them? Or do you? In that case, if only firewall traffic went north, passing the ports through could be an option.
There is also this wiki article, in case you have not already discovered it in the meantime: https://pve.proxmox.com/wiki/PCI(e)_Passthrough#_general_requirements

Whether that is a viable path also depends on the hardware capabilities and the use case.

---
Of course you already have some resilience by using bonds (most likely with stacked switches);
my assumption was that the intention is to increase availability by implementing HA at the firewall application level.
Otherwise, you could do without the redundancy partner.

We use similar setups with our VyOS firewalls in front of the PVE cluster via VRRP. (And that works in test setups with virtualized firewalls as well; keep in mind to configure anti-affinity for the firewall guests.)

From my perspective, the best approach is for the firewall appliances to use a similar mechanism to monitor each other. (But in the past I didn't have the best experience with Fortinet support.)

---
And there is still the possibility to check, for example, /sys/class/net/<interface-name>/<speed|carrier>
to determine whether an interface is down and then shut down one guest, or SSH/serial-console into one guest to do the configuration.
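A very simple poll in that direction could look like this (a sketch only: the NICs are the bond1 members from your config, while the VM ID and the two-second interval are just placeholders):

Code:
#!/bin/bash
# Shut down the local firewall guest once both internet-bond members
# have lost carrier, so the partner firewall can take over.
MEMBERS="ens2f1np1 ens3f1np1"
VMID=100

while true; do
    up=0
    for nic in $MEMBERS; do
        [ "$(cat /sys/class/net/$nic/carrier 2>/dev/null)" = "1" ] && up=1
    done
    if [ "$up" -eq 0 ] && qm status "$VMID" | grep -q running; then
        qm shutdown "$VMID"
    fi
    sleep 2
done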


I hope my elaboration helps you a little step further.

BR, Lucas
 
In that case: the firewall VMs should indeed act as the firewalls for the other services.
But yes, thank you for the explanation; I will take a closer look at your advice.

with kind regards, Martin