I have a bit of a strange one. I have a pfSense VM set up on Proxmox 8.1.3 with the following config:
agent: 1
balloon: 0
boot: order=scsi0;ide2
cores: 4
cpu: host
hostpci0: 0000:01:00.1,pcie=1 # passed-through 10GbE port from an Intel X540-AT2 NIC, connected to the 10GbE ONT
ide2: local:iso/pfSense-CE-2.7.2-RELEASE-amd64.iso,media=cdrom,size=854172K
machine: q35
memory: 4096
meta: creation-qemu=8.1.2,ctime=1702956906
name: PFSENSE-2
net0: virtio=BC:24:11:EF:4B:41,bridge=vmbr0 # the other 10GbE port on the same Intel X540-AT2; connects all VMs and CTs on the host
numa: 0
onboot: 1
ostype: l26
scsi0: VMs:vm-107-disk-0,iothread=1,size=12G
scsihw: virtio-scsi-single
smbios1: uuid=7aa0d7e5-90b6-444c-98ec-2bcdab0a0e43
sockets: 1
startup: order=1,up=30
vmgenid: 331695e6-6024-4a0f-a672-cd39aac55e20
The following are disabled in the pfSense configuration (quick verification sketch after the list):
Hardware Checksum Offloading
Hardware TCP Segmentation Offloading
Hardware Large Receive Offloading
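For completeness, this is roughly how I confirmed those offloads are actually off at the driver level. The interface names are assumptions from my box (ix0 = the passed-through X540 WAN port, vtnet0 = the virtio NIC on vmbr0); yours may differ:

# From a pfSense shell (SSH option 8 or Diagnostics > Command Prompt):
ifconfig ix0 | grep options
ifconfig vtnet0 | grep options
# With the GUI boxes unchecked, the options= line should not list
# TXCSUM/RXCSUM, TSO4/TSO6, or LRO.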
WireGuard is configured in a standard way with an MTU of 1420 and an MSS of 1380.
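Those values match the usual overhead math, so I don't think the numbers themselves are wrong. For reference, this is the arithmetic and the path-MTU sanity check I used (the ping flags are Linux-specific, and <vm-ip> is a placeholder):

# WireGuard adds up to 80 bytes of overhead (60 on IPv4, 80 on IPv6):
#   1500 - 80 = 1420 tunnel MTU
#   1420 - 40 (IPv4 + TCP headers) = 1380 MSS
# Path-MTU sanity check through the tunnel, from a Linux client:
ping -M do -s 1392 <vm-ip>   # 1392 + 28 = 1420 bytes on the wire; should pass
ping -M do -s 1400 <vm-ip>   # exceeds the tunnel MTU with DF set; should fail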
pfSense version is CE 2.7.2
pfSense is connected to a 5Gbps x 5Gbps XGS-PON service at a remote location.
My home connection is a 1Gbps x 35Mbps Cable connection.
When I connect to the remote pfSense VM with WireGuard and run an Ookla speedtest, I get around 500Mbps down and 35Mbps up.
When I try to iperf to a VM on the Proxmox host through the tunnel, though, I only get about 25Mbps.
So: when I connect through the WireGuard tunnel and run a speedtest to an Ookla server, I get great performance. When I connect through the same tunnel and test bandwidth to a VM on the Proxmox host, it's terribly slow.
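In case it helps narrow this down, these are the kinds of iperf3 runs I can repeat on request; <vm-ip> is a placeholder for the VM's LAN address:

# On the remote VM:
iperf3 -s
# From home, through the tunnel:
iperf3 -c <vm-ip>             # TCP, home -> VM
iperf3 -c <vm-ip> -R          # reverse direction: VM -> home
iperf3 -c <vm-ip> -u -b 200M  # UDP at a fixed rate, rules out TCP/MSS effects
iperf3 -c <vm-ip> -P 4        # parallel streams, shows per-flow limits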
That same VM I'm testing against can itself get 5.5Gbps up and down to the closest Ookla server.
I'm scratching my head here and wondering if anyone has suggestions on what to try. I did try adjusting MTU and MSS values with no real change. It's very strange to me that testing through the tunnel to the outside world is fast, but testing through the same tunnel to VMs on vmbr0 is really slow.
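One thing I haven't done yet is capture traffic on the pfSense LAN side during an iperf run to look for retransmits or fragmentation; something like this should show it (vtnet0 and <vm-ip> are assumptions from my setup):

# From a pfSense shell, while an iperf3 run is active:
tcpdump -ni vtnet0 -c 200 host <vm-ip> and tcp
# Lots of duplicate ACKs / retransmissions would point at loss on the
# bridge path; fragments would point at an MTU mismatch somewhere.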
Let me know if there's any more pertinent information that might help diagnose.
Thanks!