firewall=1 creates a Linux bridge fwbr interface, so it's not usable here. Implementing the whole firewall code in VPP is another matter (and currently the Proxmox code doesn't have any easy plugin mechanism for implementing a different firewall).
So, why do you want to use pve-firewall? (I mean, you could disable the pve-firewall service, or uncheck the firewall checkbox on the VMs.)
I'll try to look at the proxmox-firewall code, but it shouldn't be needed.
As far as I understand the architecture, one part of it is single-packet processing through the entire pipeline vs. handling batches of packets (aka vectors) at each step of the pipeline. The latter saves time loading the CPU instructions...
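As a toy sketch of that difference (my own illustration, not actual VPP code; the stages and the packet representation are invented), the same pipeline can be run packet-by-packet or a batch at a time:

```python
# Toy model: three pipeline stages applied to "packets" (just ints here).
# In a real vector dataplane, the batched form keeps each stage's
# instructions hot in the i-cache while it chews through the vector.

def parse(pkt):
    return pkt | 0x1

def classify(pkt):
    return pkt | 0x2

def forward(pkt):
    return pkt | 0x4

STAGES = [parse, classify, forward]

def scalar_run(packets):
    """One packet traverses the whole pipeline before the next starts."""
    out = []
    for p in packets:
        for stage in STAGES:
            p = stage(p)
        out.append(p)
    return out

def vector_run(packets, batch=256):
    """Each stage processes a whole batch (vector) before the next stage runs."""
    out = []
    for i in range(0, len(packets), batch):
        vec = packets[i:i + batch]
        for stage in STAGES:
            vec = [stage(p) for p in vec]
        out.extend(vec)
    return out

# Both orderings produce the same result; only the traversal differs.
assert scalar_run(list(range(10))) == vector_run(list(range(10)))
```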
Maybe the best way is to ask on the dev mailing list, pve-devel@lists.proxmox.com. (I'm pretty sure that some users would be interested in a router VM appliance.)
The basic dev doc for patch submission is here...
This is needed if you want to use iptables (used by pve-firewall) to apply IP rules at the bridge level.
Why do you want to disable them?
Alternatively, they shouldn't be needed by the new nftables-based firewall (the proxmox-firewall service), as...
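For context, these are the standard br_netfilter sysctls in question (my assumption that the thread is about these knobs; the names come from the kernel's bridge-netfilter integration, and the br_netfilter module must be loaded for them to exist):

```
# /etc/sysctl.d/pve-bridge.conf (example filename)
# Make bridged (layer-2) traffic traverse iptables, as pve-firewall expects.
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
```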
Mmm, this seems to be a change in Debian 13:
https://www.debian.org/releases/trixie/release-notes/issues.html#etc-sysctl-conf-is-no-longer-honored
In Debian 13, systemd-sysctl no longer reads /etc/sysctl.conf. The package...
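In practice the fix is to move your settings into a snippet under /etc/sysctl.d/ (a sketch; the filename and the setting shown are just examples):

```
# /etc/sysctl.d/99-local.conf (example filename)
# systemd-sysctl reads /etc/sysctl.d/*.conf at boot;
# apply immediately with: systemctl restart systemd-sysctl
net.ipv4.ip_forward = 1
```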
Say thank you to systemd: the NIC naming is based on PCI slot ordering. Sometimes, when adding a PCIe device (or NVMe drive), the internal order can change (depending on the motherboard).
PVE 9 has a new feature to add a static name "nicX" based...
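Independent of that PVE 9 feature, a name can already be pinned today with a systemd .link file (a sketch; the MAC address and the name below are placeholders):

```
# /etc/systemd/network/10-nic0.link (example filename)
# Matches the card by MAC and gives it a stable name,
# independent of PCI slot ordering.

[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=nic0
```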
Do you use bonding on your Proxmox node? If yes, which mode?
The dropped traffic could be multicast, or unicast flood where the destination IP is not the IP of your VM. (Also check that the MAC address ageing timeout isn't too low on your physical switch.)
I know that ext4 had problems with discard in the past (not about fragmentation, but discard not always working).
Personally, I'm using XFS in production, and I have never had this problem (on 4000 VMs).
(Small reminder: don't use ZFS on consumer SSD/NVMe drives. They can't handle a lot of fsyncs because they don't have PLP (power-loss protection capacitors), and ZFS does a lot of syncs. It's really like 200~1000 IOPS max with this kind of drive.)
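A rough way to see this on your own hardware (my sketch, purely illustrative; numbers vary wildly by device, and a drive with PLP can ack fsyncs from capacitor-backed cache while a consumer drive must hit flash):

```python
# Tiny fsync micro-benchmark: how many durable 4 KiB writes per second
# can this volume sustain? Each os.fsync() forces the device to persist.
import os
import tempfile
import time

def fsync_iops(n=100):
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(n):
            os.write(fd, b"x" * 4096)
            os.fsync(fd)  # force the write down to stable storage
        elapsed = time.perf_counter() - start
        return n / elapsed
    finally:
        os.close(fd)
        os.unlink(path)

print(f"~{fsync_iops():.0f} fsync/s on this volume")
```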
This is normal; don't use vmxnet3 or e1000, they are full software emulation. You need to use virtio, which uses vhost-net offloading on the PVE host.
Your CPU is quite old, and it's possible that the Spectre/Meltdown/... mitigations impact performance...
250566 pps is quite low; I mean, you should reach 1~2 Mpps for any packet size. I remember easily reaching 7~9 Gbit/s with one core/thread at the standard 1500 MTU (with an EPYC v3 at 3.5 GHz and the CPU forced to max frequency).
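To put those numbers side by side, a quick pps-to-throughput conversion (my sketch; it counts payload-level framing only and ignores the ~24 bytes of per-frame on-wire overhead):

```python
# Convert a packet rate to throughput for a given packet size.

def gbit_per_s(pps, pkt_bytes):
    return pps * pkt_bytes * 8 / 1e9

# the 250566 pps figure above, at 1500-byte packets:
print(f"{gbit_per_s(250566, 1500):.2f} Gbit/s")     # -> 3.01 Gbit/s

# 2 Mpps at 1500 bytes:
print(f"{gbit_per_s(2_000_000, 1500):.1f} Gbit/s")  # -> 24.0 Gbit/s
```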
As far as I remember, the virtio-net limit is around 2 million pps per core (depending on the CPU frequency). The only way around it is to increase the number of queues on the virtio NIC.
(If you are CPU limited, you should see a vhost-net process at 100% on the PVE...
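For reference, a sketch of how multiqueue is set on the PVE host (the VMID, bridge, and queue count are just examples; the queue count should generally not exceed the VM's vCPU count):

```
# example: give net0 of VM 100 four queues
qm set 100 --net0 virtio,bridge=vmbr0,queues=4

# resulting line in the VM config (/etc/pve/qemu-server/100.conf):
# net0: virtio=<generated MAC>,bridge=vmbr0,queues=4
```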
I think it could be done with a dedicated interface in each zone/VRF (not sure if a VLAN-tagged interface could work to avoid the need for dedicated interfaces). That's why I'm currently doing it with my physical router/switch, with a lot...