pve-firewall blocks large UDP IPsec packets

Jarek

Hello,
The firewall is enabled at the datacenter level, but disabled on the host and the VM. Packets 5 and 6 (see the attached image), as well as 8 and 9, appear on the VM's tap interface on the host, but pve-firewall drops those frames (they never appear on the host's uplink interface). Everything works fine when the pve-firewall service is disabled.
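To see where the frames vanish, capturing on both sides of the firewall helps. A sketch (tap713i0 and eno1 are placeholder interface names; adjust them to your setup):
Code:
# first fragments of ISAKMP (UDP 500) plus all non-first IP fragments
tcpdump -ni tap713i0 'udp port 500 or ip[6:2] & 0x1fff != 0'
# the same capture on the host uplink; with pve-firewall running,
# the fragmented packets never show up here
tcpdump -ni eno1 'udp port 500 or ip[6:2] & 0x1fff != 0'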
VM config:
Code:
balloon: 0
boot: order=ide2;ide0;net0
cores: 2
cpu: pentium3
ide0: vm-ntt_vm:vm-713-disk-0,cache=writeback,size=10G
ide2: none,media=cdrom
memory: 1024
name: pliVPN
net0: e1000=C6:C3:34:XX:XX:XX,bridge=vmbr0,tag=16
numa: 0
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=74168682-d9fd-49bc-8ae8-xxxxxxxx
sockets: 1
vga: std
vmgenid: 9ea3206d-c00e-46e4-98aa-xxxxxxx

Code:
proxmox-ve: 7.3-1 (running kernel: 5.15.85-1-pve)
pve-manager: 7.3-6 (running version: 7.3-6/723bb6ec)
pve-kernel-helper: 7.3-4
pve-kernel-5.15: 7.3-2
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-2
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-1
lxcfs: 5.0.3-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.5
pve-cluster: 7.3-2
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-3
pve-ha-manager: 3.5.1
pve-i18n: 2.8-2
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1
 

Attachment: Przechwycenie obrazu ekranu_2023-03-10_16-35-42.png (screenshot of the packet capture)
Maybe you have an MTU and fragmentation problem?
The MTU is set to 1500 on all interfaces (node and VM). What kind of problem do you mean? Yes, the packets are fragmented, but that is the proper way to send large packets over the network.
The firewall drops fragmented packets when it is enabled.
Why? Which rule in the firewall drops these packets?
 
The MTU is set to 1500 on all interfaces (node and VM). What kind of problem do you mean? Yes, the packets are fragmented, but that is the proper way to send large packets over the network.
I mean: even if you have 1500 on your node and VM, are you sure you don't have a lower MTU somewhere in your physical network or on a router (or maybe a VPN or encapsulation tunnel)?

Can you try a "ping -M do -s 1470 .." between two hosts to be sure?
Why? Which rule in the firewall drops these packets?
No rule. The firewall's conntrack can't work with fragmented packets.
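If you want to verify that, you can look at the conntrack table while the tunnel negotiates. A sketch, assuming the conntrack-tools package is installed:
Code:
# list conntrack entries for ISAKMP traffic (UDP destination port 500)
conntrack -L -p udp --dport 500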
 
I mean: even if you have 1500 on your node and VM, are you sure you don't have a lower MTU somewhere in your physical network or on a router (or maybe a VPN or encapsulation tunnel)?
Yes, I'm sure, because after 'service pve-firewall stop' everything works as expected.

Can you try a "ping -M do -s 1470 .." between two hosts to be sure?
Works.

No rule. The firewall's conntrack can't work with fragmented packets.
I don't understand this. There are many routers between the VM and the other end of the IPsec tunnel, and all of them forward these packets without problems.
I did some investigating. After
Code:
iptables-save > ipt
service pve-firewall stop
iptables-restore < ipt
it was still no good; frames were dropped. After
Code:
iptables -F INPUT; iptables -F FORWARD; iptables -F OUTPUT
there was still no change. I had to flush almost all PVEFW-* chains, even those without references and with counters at 0, to make it work.
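For reference, a sketch of the kind of flush that was needed (assuming the firewall's chains all carry the PVEFW prefix, as in the iptables-save output):
Code:
# flush every PVEFW-* chain in the filter table, not just INPUT/FORWARD/OUTPUT
for chain in $(iptables -S | awk '/^-N PVEFW/ {print $2}'); do
    iptables -F "$chain"
done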
Any suggestions?
 
Your screenshot shows "Fragmented IP protocol" and reassembled packets.
This can't work with a firewall bridge like the one Proxmox uses.
The main rule,
-A PVEFW-FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

needs unfragmented packets for conntrack when working in bridge forwarding.


Could it be that the ISAKMP protocol sends packets bigger than the MTU? (I know that IKE can be configured with a fragment size, but for ISAKMP I really don't know.)
 
Your screenshot shows "Fragmented IP protocol" and reassembled packets.
This can't work with a firewall bridge like the one Proxmox uses.
The main rule,
-A PVEFW-FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

needs unfragmented packets for conntrack when working in bridge forwarding.
You're right.
Could it be that the ISAKMP protocol sends packets bigger than the MTU? (I know that IKE can be configured with a fragment size, but for ISAKMP I really don't know.)
Yes, IKE can be configured with fragment=yes, but the remote end of the IPsec tunnel does not support it.
So there is no solution?
 
You're right.

Yes, IKE can be configured with fragment=yes, but the remote end of the IPsec tunnel does not support it.
So there is no solution?
I think the only solution is to disable the Proxmox firewall (uncheck the firewall checkbox on the VM's network interface) and do the filtering inside the guest OS.

(Personally, I'm running the strongSwan IPsec daemon in my VMs (fragment=yes by default), and I've never had a problem with various remote appliances (Cisco, NetShield, Forti, ...). I'm not sure about the configuration of the remote side.)
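For illustration, a hypothetical strongSwan snippet; the conn name is a placeholder, and in ipsec.conf the keyword is "fragmentation" (it defaults to yes in recent releases):
Code:
# /etc/ipsec.conf (hypothetical example)
conn example-tunnel
    fragmentation=yes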
 
I think the only solution is to disable the Proxmox firewall (uncheck the firewall checkbox on the VM's network interface) and do the filtering inside the guest OS.
The firewall is disabled on the VM and the host. The only way to make it work is to delete every netfilter rule (on the host) that switches the firewall into stateful mode. I guess there is a bug in the netfilter code that prevents fragmented packets from being reassembled when using a bridge (instead of routing).
(Personally, I'm running the strongSwan IPsec daemon in my VMs (fragment=yes by default), and I've never had a problem with various remote appliances (Cisco, NetShield, Forti, ...). I'm not sure about the configuration of the remote side.)
I know nothing about the remote end: neither the hardware, the software, nor the config.
When I send packets with fragmentation=yes, the remote end responds with an 'invalid payload' message.
 
netfilter code that prevents fragmented packets from being reassembled when using a bridge (instead of routing).


https://www.spinics.net/lists/netdev/msg596072.html
"
When the "/proc/sys/net/bridge/bridge-nf-call-iptables" is on, bridge
will do defragment at PREROUTING and re-fragment at POSTROUTING. At
the re-fragment bridge will check if the max frag size is larger than
the bridge's MTU in br_nf_ip_fragment(), if it is true packets will
be dropped."
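A quick way to check whether that code path is active on a host (the sysctl only exists once the br_netfilter module is loaded):
Code:
# 1 = bridged IPv4 traffic is passed to the iptables hooks, so the
# defragment/re-fragment behaviour described above applies
sysctl net.bridge.bridge-nf-call-iptables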

The only way to fix this is to migrate the current Proxmox firewall code to nftables instead of iptables/ebtables.
With iptables, the bridge firewall is a bit tricky: IP traffic is forwarded to iptables (instead of being handled in ebtables), and the information about the original packet size is lost (so packets can't be re-fragmented to their initial size).
nftables allows managing this directly.
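For the curious, a minimal, hypothetical sketch of a stateful bridge-family ruleset in nftables (not the actual Proxmox implementation; conntrack in the bridge family needs the nf_conntrack_bridge module, kernel 5.3+):
Code:
# stateful filtering directly at the bridge forward hook
table bridge filter {
    chain forward {
        type filter hook forward priority 0; policy accept;
        ct state established,related accept
    }
}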

I don't know about the roadmap for nftables integration; maybe Proxmox 8, maybe later...
 
Having the same issue (VXLAN over WireGuard, which causes fragmentation).

To clarify for others reading the thread:
This is still an issue in 8.0.4.
Traffic to VMs will be impacted even though the firewall is disabled on their NICs.
The fix is to disable the Proxmox firewall on either the node or the cluster.
 
Having the same issue (VXLAN over WireGuard, which causes fragmentation).

To clarify for others reading the thread:
This is still an issue in 8.0.4.
Traffic to VMs will be impacted even though the firewall is disabled on their NICs.
The fix is to disable the Proxmox firewall on either the node or the cluster.
This issue still exists on 8.1.10 as well...
 
nftables support is coming soon, and it should avoid the extra fwbrX bridge, so it should work with fragmentation.

But please try to avoid fragmentation anyway: lower your MTU.
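A hypothetical example of lowering the MTU on a guest NIC (VM ID and values are placeholders, and the mtu= option only works with VirtIO NICs, so an e1000 NIC would have to be switched to virtio first):
Code:
# give the guest NIC a 1400-byte MTU so the IPsec overhead still fits in 1500
qm set 713 --net0 virtio,bridge=vmbr0,tag=16,mtu=1400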
It is not always that easy to lower the MTU when you have IPsec connections to other parties where you have no control over the external peer...
 
