Hello all,
I'm having a strange networking bottleneck with PVE 8.1.4 with virtualized OPNsense 24.1.4.
The hardware is a Dell R710 with dual 6-core Xeon X5690 CPUs and 128 GB RAM. The onboard NICs are the motherboard's Intel GbE ports; the PCIe 10G NIC is a QLogic cLOM8214 with FS 10G Ethernet optics in both SFP+ slots. One slot has no cable plugged in; the other goes straight to the ISP's box via a 10G connection.
All VMs are using VirtIO.
The ISP says the connection should be good for 8 Gbit/s.
A Debian 12 live VM with a single vCPU and no multiqueue can iperf3 to the PVE host over vmbr0.10 at ~13 Gbit/s.
vmbr999 exists solely to bridge enp4s0f1 (WAN) to OPNsense. OPNsense also sits on vmbr0.10 at 10.xxx.xxx.1/24; the other VLANs are irrelevant to this issue.
iperf3 gives very interesting results (always TCP; single-stream and 10 parallel streams produce the same numbers):
- From the PVE CLI to an external host at another ISP: 7.5 Gbit/s up, 6.95 Gbit/s down (through the OPNsense VM!)
- Between the OPNsense VM and the same external host: 2.06 Gbit/s up, 1.26 Gbit/s down
- Between any two Linux VMs on the same PVE host and same VLAN (vmbr0.10, 10.xxx.xxx.xxx/24), OPNsense excluded: 11-13 Gbit/s
- Between the OPNsense VM and the PVE host: 1.7 Gbit/s up to PVE, 1.45 Gbit/s down from PVE
- From various Linux VMs on vmbr0.10 to the same external host: 1.77 Gbit/s up, 1.79 Gbit/s down
Things I have already tried, none of which helped:
- Throwing more hardware at any of the VMs.
- Changing CPU types and/or flags.
- Playing with queues on any VM; no-multiqueue on all VMs gives the same numbers, including PVE-to-internet at 7.5 Gbit/s.
- Playing with tunables in OPNsense (and PVE can reach the internet at full blast anyway, so the issue does not seem to be inside the OPNsense VM).
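For reference, the tests above were all of roughly this shape (the addresses below are placeholders, not my real ones):

```shell
# On the target (external host, PVE host, or a VM):
iperf3 -s

# On the client, 30-second TCP runs:
iperf3 -c 10.xxx.xxx.yyy -t 30          # upload (client -> server)
iperf3 -c 10.xxx.xxx.yyy -t 30 -R      # download (reverse direction)
iperf3 -c 10.xxx.xxx.yyy -t 30 -P 10   # 10 parallel streams
```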
Thanks in advance!
Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.13-1-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5: 6.5.13-1
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
ceph-fuse: 18.2.0-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
dnsmasq: 2.89-1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.2
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.1
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.1.0
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.4
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.4
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.9-2
pve-ha-manager: 4.0.3
pve-i18n: 3.2.1
pve-qemu-kvm: 8.1.5-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve2
Code:
agent: 1,fstrim_cloned_disks=1
boot: order=scsi0
cores: 4
cpu: host,flags=+pcid;+spec-ctrl;+ssbd;+pdpe1gb;+aes
machine: q35
memory: 16384
meta: creation-qemu=8.1.2,ctime=1710625688
name: OPNsense
net0: virtio=BC:24:11:FC:AE:A1,bridge=vmbr999,queues=4
net1: virtio=BC:24:11:CD:98:1D,bridge=vmbr0,queues=4,tag=10
net2: virtio=BC:24:11:1B:0A:2D,bridge=vmbr0,queues=4,tag=20
net3: virtio=BC:24:11:8A:C6:42,bridge=vmbr0,queues=4,tag=99
numa: 0
onboot: 1
ostype: other
protection: 1
scsi0: local-zfs:vm-100-disk-0,cache=writeback,discard=on,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=[snip]
sockets: 1
startup: order=1
vga: qxl
vmgenid: [snip]
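In case it helps with reproducing: the queues=4 setting can be sanity-checked from the host side. Proxmox names the host-side tap devices tapVMIDiN, so tap100i1 below is my assumption for net1 of VM 100; adjust the name to match your VMID and NIC index.

```shell
# Each allocated tap queue shows up as an rx-N/tx-N pair in sysfs;
# with queues=4 you should see four of each:
ls /sys/class/net/tap100i1/queues/

# Offload state on the same tap device (FreeBSD-based guests such as
# OPNsense are known to be sensitive to checksum/segmentation offload
# on virtio NICs):
ethtool -k tap100i1 | grep -E 'checksum|segmentation|scatter'
```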
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual
#Trunk to Cisco Switch

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

iface enp4s0f0 inet manual
#PVE-PBS 10G, not yet in service

iface enp4s0f1 inet manual
#10G SFP+ to ISP box

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#Bridge TRUNK Proxmox-Switch

auto vmbr0.10
iface vmbr0.10 inet static
        address 10.xxx.xxx.yyy/24
        gateway 10.xxx.xxx.1
#Internal VLAN

auto vmbr0.20
iface vmbr0.20 inet manual
#Guest VLAN

auto vmbr0.99
iface vmbr0.99 inet static
        address 172.xxx.xxx.99/24
#Management VLAN

auto vmbr999
iface vmbr999 inet manual
        bridge-ports enp4s0f1
        bridge-stp off
        bridge-fd 0
#Bridge OPNsense - ISP box

source /etc/network/interfaces.d/*
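For completeness, I verified the bridge and VLAN setup on the host with iproute2 (these only query state; output omitted here):

```shell
# List which physical ports are enslaved to which bridge
# (eno1 -> vmbr0, enp4s0f1 -> vmbr999):
bridge link show

# Confirm the VLAN filter on the trunk port matches bridge-vids 2-4094:
bridge vlan show dev eno1

# Quick up/down overview of all bridges on the host:
ip -br link show type bridge
```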