Hi. I'm running Proxmox Virtual Environment 8.3.3. I've migrated my hardware but still have both the old and the new environments running. The old one had just one NIC, which is one of the reasons I'm moving to new hardware with two NICs. I run pfSense and other VMs/containers.
From both the old Proxmox host and a Windows PC, a simple iperf3 run against the new Proxmox host gives around 200 Mbit/s on a 1 Gbps link. From the same Windows laptop to the old Proxmox host I get ~900 Mbps. I also see around 200-300 retransmits (Retr) per interval and a very small congestion window (Cwnd).
If I test in the reverse direction, it works as expected (~900 Mbps).
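For reference, the tests above were run roughly like this (the hostname is a placeholder for my actual addresses):

```shell
# On the new Proxmox host ("server" side)
iperf3 -s

# From the old Proxmox host or the Windows laptop ("client" side)
iperf3 -c new-proxmox.lan        # forward direction: ~200 Mbit/s, many retransmits
iperf3 -c new-proxmox.lan -R     # reverse direction: ~900 Mbit/s, as expected
```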
After a lot of digging I found that, on the old Proxmox host, either setting net.ipv4.tcp_congestion_control=bbr or using iperf3's -C option solves the issue. Note that I'm changing this on the "client" (sending) side, but I can't find a way that actually works to set it on the Windows laptop. And it puzzles me that a change on the server side seems to require a change on the client side. I couldn't find any meaningful difference in TCP settings or interface configuration between the two Proxmox hosts. Both use a Linux bridge with VLANs enabled, as I have different VLANs on the "internal" NIC; the old system adds an extra VLAN for the external (Internet) connection, which now has its own NIC.
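Concretely, either of these on the sending host restores full throughput for me (again, the hostname is a placeholder):

```shell
# Switch the system default congestion control to BBR (needs the tcp_bbr module)
modprobe tcp_bbr
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Or override it just for one test, without touching the system default
iperf3 -c new-proxmox.lan -C bbr
```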
I can paste whatever you think is useful; I just didn't want to clutter the initial post with potentially useless information. Please note that I'm not experienced with this level of network debugging...
Just to illustrate the issue, from old proxmox to new:
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  24.1 MBytes   202 Mbits/sec  240   38.2 KBytes
[  5]   1.00-2.00   sec  25.8 MBytes   216 Mbits/sec  237   24.0 KBytes
[  5]   2.00-3.00   sec  24.1 MBytes   202 Mbits/sec  229   31.1 KBytes
[  5]   3.00-4.00   sec  21.7 MBytes   182 Mbits/sec  202   31.1 KBytes
And with the congestion control changed to BBR:
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   112 MBytes   935 Mbits/sec   47    311 KBytes
[  5]   1.00-2.00   sec   109 MBytes   914 Mbits/sec   12    311 KBytes
[  5]   2.00-3.00   sec   110 MBytes   923 Mbits/sec    0    308 KBytes
[  5]   3.00-4.00   sec   109 MBytes   913 Mbits/sec    0    314 KBytes
[  5]   4.00-5.00   sec   110 MBytes   924 Mbits/sec   20    314 KBytes
Is there any "server"-side configuration that would explain this? That would be the preferred path, but if I could set the congestion control on the Windows side I could live with that, since that laptop is my main/daily workstation...
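For completeness, this is roughly what I tried on the Windows side (from an elevated PowerShell). I'm not sure these are even the right knobs here, and the set commands may be missing or locked on client editions of Windows, so treat this as a sketch of the attempt, not a working recipe:

```shell
# Show which congestion provider each TCP template currently uses
Get-NetTCPSetting | Format-Table SettingName, CongestionProvider

# Attempt to change a template's provider (available values depend on the
# Windows build; often not configurable on client SKUs)
Set-NetTCPSetting -SettingName InternetCustom -CongestionProvider CTCP

# netsh view of the same templates
netsh int tcp show supplemental
```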
Thanks in advance.