About spanning tree: you should really disable it on the physical switch ports facing your Proxmox nodes. A spanning-tree convergence can happen on host reboot and break the whole cluster for a few seconds.
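If you can't disable STP on those ports entirely, an edge/portfast setting avoids the convergence delay on link-up. A sketch in Cisco-style syntax (the interface name is a placeholder; other vendors call this an "edge port"):

```
interface GigabitEthernet1/0/1
 spanning-tree portfast
 spanning-tree bpduguard enable
```

bpduguard additionally err-disables the port if a switch/bridge ever appears behind it, which protects against accidental loops.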
You don't need to change knet_mtu, it's...
Strange that you also have high memory pressure ("PSI some memory"). Do you have the NUMA option enabled on the VM?
You can also look at the host NUMA stats:
# apt install numactl
# numastat
and check that you don't have a lot of "numa_miss" vs "numa_hit"...
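To illustrate what to look for, here's a quick sketch that parses numastat-style output with awk; the sample numbers below are made up, only the row names (numa_hit, numa_miss) match what numastat actually prints:

```shell
# Build a fabricated numastat-style sample (values are invented).
cat <<'EOF' > /tmp/numastat.sample
                           node0           node1
numa_hit                98742310        97231455
numa_miss                 123456         9876543
numa_foreign             9876543          123456
EOF

# Compute the numa_miss / numa_hit ratio per node.
ratios=$(awk '
/^numa_hit/  { ncols = NF - 1; for (i = 2; i <= NF; i++) hit[i-1]  = $i }
/^numa_miss/ {                 for (i = 2; i <= NF; i++) miss[i-1] = $i }
END { for (n = 1; n <= ncols; n++)
        printf "node%d miss/hit ratio: %.4f\n", n - 1, miss[n] / hit[n] }
' /tmp/numastat.sample)
printf '%s\n' "$ratios"
```

On a healthy host, numa_miss is a tiny fraction of numa_hit; a double-digit percentage like node1 above suggests the VM's memory is being allocated on the wrong node.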
Can you send the corosync log of each node? (journalctl -u corosync)
Is the NIC for the corosync link dedicated? Or do you have VM, storage, backup, ... traffic running on it too? (No bandwidth saturation?)
No spanning tree on the network? Do you use...
firewall=1 creates a Linux bridge fwbr interface, so it's not usable here. Implementing the whole firewall code in VPP is another thing. (And currently the Proxmox code doesn't have any easy plugin mechanism to implement a different firewall.)
So, why do you want to use pve-firewall? (I mean: disable the pve-firewall service, or uncheck the firewall checkbox on the VMs.)
I'll try to look at the proxmox-firewall code, but it shouldn't be needed.
As far as I understand the architecture, one part of it is processing a single packet through an entire pipeline vs. handling batches of packets (aka vectors) at each step of the pipeline. The latter saves time when loading the CPU instructions...
Maybe the best way is to ask on the dev mailing list pve-devel@lists.proxmox.com. (I'm pretty sure that some users could be interested in router VM appliances.)
The basic dev doc for patch submission is here...
This is needed if you want to use iptables (used by pve-firewall) to apply IP rules at the bridge level.
Why do you want to disable them?
Alternatively, they shouldn't be needed by the new nftables-based firewall (the proxmox-firewall service), as...
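For context, these are the br_netfilter sysctls in question (as far as I know, pve-firewall relies on them so that bridged traffic is passed to iptables/ip6tables):

```
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```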
Mmm, this seems to be a change in Debian 13:
https://www.debian.org/releases/trixie/release-notes/issues.html#etc-sysctl-conf-is-no-longer-honored
In Debian 13, systemd-sysctl no longer reads /etc/sysctl.conf. The package...
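The fix from the release notes is simply to move your custom settings into a fragment under /etc/sysctl.d/, which is still read:

```
# as root
mv /etc/sysctl.conf /etc/sysctl.d/99-local.conf
sysctl --system    # reload all fragments and verify they apply
```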
Say thank you to systemd. The NIC naming is based on PCI slot ordering. Sometimes, when adding a PCIe device (or NVMe drive), the internal order can change (depends on the motherboard).
pve9 has a new feature to add a static name "nicX" based...
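I haven't double-checked the exact pve9 tooling, but the underlying mechanism is a systemd .link file that matches the NIC by MAC address, so the name survives PCI reordering (the MAC below is a placeholder):

```
# /etc/systemd/network/10-nic0.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=nic0
```

You may also need to run `update-initramfs -u` afterwards, since the rename is applied early at boot.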
Do you use bonding on your Proxmox node? If yes, which mode?
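If there is a bond, the mode is visible in /etc/network/interfaces; an LACP example (slave names are placeholders):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
```

`cat /proc/net/bonding/bond0` shows the currently active mode and slave state.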
The dropped traffic could be multicast, or a unicast flood where the destination IP is not the IP of your VM. (Also check that the MAC address ageing timeout is not too low on your physical switch.)
I know that ext4 had problems with discard in the past (not about fragmentation, but discard not always working).
Personally, I'm using xfs in production, and I've never had this problem (across 4000 VMs).
(Small reminder: don't use zfs on consumer SSD/NVMe. They can't handle a lot of fsyncs because they don't have PLP/power-loss capacitors, and zfs does a lot of syncs. It's really like 200~1000 IOPS max with this kind of drive.)
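You can measure the fsync-limited IOPS yourself with fio; something like this is one common way to run it (the filename, size and runtime are just example values):

```
fio --name=synctest --filename=/tmp/fio.test --size=1G \
    --rw=write --bs=4k --ioengine=sync --fsync=1 \
    --runtime=30 --time_based
```

A consumer drive typically lands in the few-hundred IOPS range here, while a datacenter drive with PLP reaches tens of thousands.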
This is normal: don't use vmxnet3 or e1000, they are full software emulation. You need to use virtio, which uses vhost-net offloading on the PVE host.
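Switching an existing VM NIC to virtio is a one-liner with qm (vmid 100 and vmbr0 are placeholders):

```
qm set 100 --net0 virtio,bridge=vmbr0
```

Note that this generates a new MAC address; if the guest depends on the old one, pass it explicitly as `virtio=<mac>,bridge=vmbr0`.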
Your CPU is quite old, and it's possible that Spectre/Meltdown/... mitigations impact performance...
250566 pps is quite low; I mean, you should reach 1~2 Mpps for any packet size. I remember easily reaching 7~9 Gbit/s with 1 core/thread at the standard 1500 MTU (with an EPYC v3 at 3.5 GHz and the CPU forced to max frequency).