Virtio NICs are basically limited by the performance of your CPU, so I guess your CPU is the bottleneck there. You might need to pass the NIC (or a virtual function when using SR-IOV) through to the VMs for better performance, so the VMs can directly access the physical hardware without the virtualization layer slowing things down in between.
According to htop inside the VM, neither of my two Ryzen 9 5950X systems was using more than 25% CPU on any particular core.
I'm limited to about 23.4 Gbps going VM-to-VM through a Linux network bridge (which sits on one of the two 100 Gbps ports of my Mellanox ConnectX-4 card, set to the ETH link type), with the nodes connected to each other via a DAC. That's with 1 stream; with 4 parallel streams it drops to 23.1 Gbps, and with 8 parallel streams it drops to 23.0 Gbps.
Host to host, I can hit 96.9 Gbps, also tested using iperf.
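For reference, the tests above were plain iperf runs along these lines (I'm showing iperf3-style syntax, and the 10.0.0.2 address is just a placeholder for the other VM/host):
Code:
# on the receiving VM/host
iperf3 -s

# single stream from the sending side
iperf3 -c 10.0.0.2 -t 30

# 4 and 8 parallel streams
iperf3 -c 10.0.0.2 -t 30 -P 4
iperf3 -c 10.0.0.2 -t 30 -P 8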

re: virtual functions
So, as I've mentioned, I've enabled SR-IOV (at least on the IB side of things).
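(In case it's useful to anyone else, the runtime side of that is just the usual sriov_numvfs sysfs knob; the device name and VF count below are from my setup and may differ on yours, and the firmware side of SR-IOV has to be enabled separately.)
Code:
# enable 4 VFs on the IB physical function at runtime
# (assumes SR-IOV is already enabled in the NIC firmware, e.g. via mlxconfig)
echo 4 > /sys/class/net/ibp8s0f0/device/sriov_numvfs

# check that the VFs actually appeared
lspci | grep -i mellanox
ip link show ibp8s0f0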
The problem that I am running into now is that I can't set the 20-byte IB MAC address for the virtual functions; when I try to do that via ip link, the error message that I get is:
Code:
# ip link set ibp8s0f0 vf 0 mac 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:22:33:44:55:66
RTNETLINK answers: Operation not supported
So, I haven't been able to really "use" said virtual functions with a VM, because they inherit the MAC address from port 0 on the physical function.
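(For what it's worth, iproute2 does expose 64-bit node_guid / port_guid attributes for VFs, which looks like the IB-mode counterpart to setting a MAC; a sketch is below, with made-up GUID values, and I can't say whether every driver/firmware combination accepts it.)
Code:
# sketch: assign 64-bit node/port GUIDs to VF 0 of the IB physical function
# (the GUID values are made-up placeholders; support depends on driver/firmware)
ip link set dev ibp8s0f0 vf 0 node_guid 00:11:22:33:44:55:66:77
ip link set dev ibp8s0f0 vf 0 port_guid 00:11:22:33:44:55:66:88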
The instructions that I have found are either for Red Hat-based Linux distros or are the official documentation for Mellanox's OFED drivers. While googling for an answer, I also came across someone saying that the Mellanox OFED driver kept "breaking" their system and apparently doesn't play nicely with Debian and/or Debian-based Linux distros.
(This is why I ended up setting one of the ports on each node/NIC to the ETH link type, so that I can create a Linux (Ethernet) network bridge for use between VMs and/or CTs inside of Proxmox.)
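(The bridge itself is just the standard Proxmox config in /etc/network/interfaces; a sketch is below, where enp8s0f1 stands in for whichever ConnectX-4 port was flipped to ETH, and the address is a placeholder.)
Code:
# /etc/network/interfaces on the Proxmox host -- sketch only;
# enp8s0f1 and 10.10.10.1/24 are placeholders for the ETH-mode port and subnet
auto enp8s0f1
iface enp8s0f1 inet manual

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports enp8s0f1
        bridge-stp off
        bridge-fd 0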