iperf3 connection issues on Proxmox VE 8.2.2

timit60711 (New Member), May 1, 2024
Hello!

A newbie here.
I previously ran Proxmox VE 7.4 with iperf3 and could test bandwidth between the host, the VMs, and other PCs with no issues.

I just updated to v8.2.2, and now I'm running into issues whenever the Proxmox host acts as the iperf3 server.

Here are the tests I've made:

1) Host (client) -> VM (WS2019, server) - connects instantly
2) Host (client) -> PC (W10, server) - connects instantly
3) VM (WS2019, client) -> PC (W10, server) - connects instantly
4) PC (W10, client) -> VM (WS2019, server) - connects instantly
5) VM (WS2019, client) -> Host (server) - "iperf3: error - unable to connect to server: Connection timed out"
6) PC (W10, client) -> Host (server) - "iperf3: error - control socket has closed unexpectedly"
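
For reference, every test is the plain iperf3 client/server pattern, e.g. (default port 5201; 10.11.4.10 is the host's address from the config further down):
Code:
# on the machine acting as the server
iperf3 -s

# on the client (here: targeting the Proxmox host)
iperf3 -c 10.11.4.10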

I tried disabling the firewall at both the Datacenter and the Node level, but it didn't help.
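
To double-check it's really off, the state can also be inspected from the shell, something like:
Code:
pve-firewall status          # should report the firewall as stopped/disabled
iptables-save | grep 5201    # any leftover rules touching the iperf3 port?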

Then I added rules on both levels explicitly allowing TCP and UDP on port 5201 (I tried other ports, too), even though the firewall was still off. Still got the same errors.
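
For reference, such a rule looks roughly like this in the firewall config files (e.g. /etc/pve/nodes/<node>/host.fw; the GUI writes the same format):
Code:
[RULES]

IN ACCEPT -p tcp -dport 5201
IN ACCEPT -p udp -dport 5201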

Then I checked the iperf3 version on the host (it's 3.12) and found the same version on the VM and the PC, so there is no version mismatch. Still no luck.
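
Checked on each machine with:
Code:
iperf3 --version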

Then, in Datacenter -> Firewall -> Options, I set "Input Policy" to "ACCEPT" (even though the firewall is already off everywhere), and here is where I get weird behaviour: when I connect to the host from the VM as a client, I have to wait ~10 seconds before it connects. A subsequent test within a short time window connects instantly.
Yet I still can't connect from the PC to the host... I have no idea why. On v7.4 I had no such issues.

Where should I start looking? How to diagnose it?
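
If it helps the diagnosis, I can capture what actually arrives on the host while a client tries to connect, something like:
Code:
tcpdump -ni vmbr0 port 5201    # watch for SYNs arriving with no reply going back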
 
For the record, Proxmox is not your operating system; it is Debian 12.5 Bookworm.

Post the IP network config of your PVE host and the actual config of all LXC containers and VMs.
 
Hi!

Here you go:
nano /etc/network/interfaces
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual
        mtu 1500

auto eno2
iface eno2 inet manual
        mtu 1500

auto eno3
iface eno3 inet manual
        mtu 1500

auto eno4
iface eno4 inet manual
        mtu 1500

iface eno49 inet manual

iface eno50 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
        mtu 1500

auto vmbr0
iface vmbr0 inet static
        address 10.11.4.10/24
        gateway 10.11.4.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        mtu 1500
        hwaddress ether 94:15:22:6A:40:20
#Internet

auto vmbr1
iface vmbr1 inet static
        address 10.12.13.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        mtu 9000
        hwaddress ether 86:cc:44:5c:54:b3
#Between VMs

VM:
nano /etc/pve/qemu-server/221.conf
Code:
agent: 1,fstrim_cloned_disks=1
balloon: 16384
boot: order=scsi1
cores: 4
cpu: kvm64,flags=+pdpe1gb;+hv-tlbflush
hotplug: disk,network,usb,memory,cpu
ide1: none,media=cdrom
memory: 32768
name: WS19-STD
net0: virtio=26:FF:D2:B2:AA:2F,bridge=vmbr0,firewall=1
net1: virtio=AA:44:1C:11:92:52,bridge=vmbr1,firewall=1
numa: 1
onboot: 1
ostype: win10
scsi1: local-lvm:vm-221-disk-0,discard=on,size=100G
scsihw: virtio-scsi-pci
smbios1: uuid=0a1f5494-4429-4aa0-4471-749374567a23
sockets: 2
startup: order=5,up=60,down=90
tablet: 1
vmgenid: 82aab2a6-42e1-5a21-b417-ea1c164daab2
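
(If it matters, the bond's 802.3ad/LACP state can be inspected on the host with:)
Code:
cat /proc/net/bonding/bond0    # per-slave LACP and aggregator state
ip -br link show               # quick overview of all interfaces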
 
Try without your bond; do a quick test with only one Ethernet cable.
Done.

1) First I made these changes in /etc/network/interfaces (applied live; see the apply/verify commands after step 5):
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual
        mtu 1500

iface eno2 inet manual
        mtu 1500

iface eno3 inet manual
        mtu 1500

iface eno4 inet manual
        mtu 1500

iface eno49 inet manual

iface eno50 inet manual

iface bond0 inet manual
        bond-slaves none
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
        mtu 1500

iface vmbr0 inet static
        address 10.11.4.10/24
        gateway 10.11.4.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500
        hwaddress ether 94:15:22:6A:40:20
#Internet

auto vmbr1
iface vmbr1 inet static
        address 10.12.13.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        mtu 9000
        hwaddress ether 86:cc:44:5c:54:b3
#Between VMs

2) Then I turned off the bond on the switch and disabled the three other Ethernet ports (eno2-4).

NB: I'm not able to physically detach the cables (I'm in another city), so I did it on the switch.

3) After that I disabled the firewall for both Datacenter and Node, and I still have the same issues.

4) Then I set the Input Policy to "ACCEPT" and tried again. After some time (~10-20 seconds) I started getting iperf3 speed statistics...

5) Lastly, I tried other interfaces (eno2/3/4) one by one. Same behaviour.
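
For reference, the step 1 changes can be applied live; ifupdown2 is the default on PVE 8, so no reboot is needed:
Code:
ifreload -a         # re-apply /etc/network/interfaces
bridge link show    # confirm only eno1 is attached to vmbr0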

So nothing changed, and I assume it isn't related to the bond... Any idea where I should look next?
 
I measured the time between starting the iperf3 test and the moment it actually begins: it's about 30-35 seconds. Maybe that's a clue?
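
Measured with a short run so the transfer itself doesn't skew it, something like:
Code:
time iperf3 -c 10.11.4.10 -t 1    # the ~30 s pause happens before the 1-second test starts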
 
Just updated to the latest version. The problem is still here. Any ideas where to start looking?
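
For completeness, the exact package versions on the host can be listed with:
Code:
pveversion -v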
 
