[SOLVED] 10G ethernet config questions

bferrell

Well-Known Member
Nov 16, 2018
I have a couple of nodes (PVE 5.2-1) that have 10G cards in them, and the switch indicates they're bonded at 10G speed, but the PVE nodes show 1000Mb/s connections. Do I need to install a driver or set a speed on the interface to get it to utilize the port properly?

Also, once I get the node to link at 10G, will the virtio interface to the guests allow >1G speeds automatically? Thanks.
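For context, the guests are on virtio NICs; the relevant line in a VM's config (/etc/pve/qemu-server/<vmid>.conf) looks roughly like this (the MAC address here is a placeholder, not my real one):

```
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
```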

Brett


Code:
enp65s0f0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 00:0a:f7:58:53:30  txqueuelen 1000  (Ethernet)
        RX packets 194530523  bytes 257836045953 (240.1 GiB)
        RX errors 132100  dropped 168682  overruns 0  frame 132100
        TX packets 237154363  bytes 214077589016 (199.3 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 55  memory 0xd0000000-d07fffff

enp65s0f1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 00:0a:f7:58:53:30  txqueuelen 1000  (Ethernet)
        RX packets 331150735  bytes 335129190384 (312.1 GiB)
        RX errors 0  dropped 39732  overruns 0  frame 0
        TX packets 214809428  bytes 286941247082 (267.2 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 69  memory 0xd1000000-d17fffff
 

Attachments

  • 10G-bond.png (14 KB)
...and ethtool sees that it's capable of 10,000Mb/s (I added commas to make it easier to see). I'm not a Linux guru, but shouldn't it auto-negotiate? Does this show that it negotiated 10G? Why does ifconfig show 1G, and why are the actual speeds so low?

Code:
Settings for enp65s0f0:
        Supported ports: [ TP ]
        Supported link modes:   100baseT/Half 100baseT/Full
                                1000baseT/Full
                                10,000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Advertised link modes:  100baseT/Half 100baseT/Full
                                1000baseT/Full
                                10,000baseT/Full
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: Yes
        Link partner advertised link modes:  1000baseT/Full
                                             10000baseT/Full
        Link partner advertised pause frame use: Symmetric
        Link partner advertised auto-negotiation: Yes
        Speed: 10000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 16
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: g
        Wake-on: d
        Current message level: 0x00000000 (0)

        Link detected: yes
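For what it's worth, the drop rate implied by the enp65s0f0 counters in the first ifconfig paste is small in percentage terms; a quick awk sanity check (numbers copied from the paste above):

```shell
# RX drop percentage from the enp65s0f0 counters quoted above:
# 168682 dropped out of 194530523 RX packets
awk 'BEGIN { dropped = 168682; rx = 194530523
             printf "RX drops: %.3f%%\n", 100 * dropped / rx }'
# prints: RX drops: 0.087%
```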
 
... and the switch is fine: when I run iperf3 between my 10G NICs on my Macs, I get about 7Gbps.

on PVE host node 1 (to 5k iMac)
Code:
root@svr-01:~# iperf3 -c 192.168.10.32
Connecting to host 192.168.10.32, port 5201
[  4] local 192.168.100.11 port 42758 connected to 192.168.10.32 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  48.2 MBytes   405 Mbits/sec   47    277 KBytes
[  4]   1.00-2.00   sec  52.0 MBytes   436 Mbits/sec   44    287 KBytes
[  4]   2.00-3.00   sec  44.0 MBytes   369 Mbits/sec    9    320 KBytes
[  4]   3.00-4.00   sec  53.1 MBytes   445 Mbits/sec    3    324 KBytes
[  4]   4.00-5.00   sec  41.0 MBytes   344 Mbits/sec   21    269 KBytes
^Z

on mac (to 5k iMac)
Code:
Carolyn-MacBook-Air-7:~ admin$ iperf3 -c 192.168.10.32
Connecting to host 192.168.10.32, port 5201
[  5] local 192.168.10.107 port 57969 connected to 192.168.10.32 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   605 MBytes  5.08 Gbits/sec                 
[  5]   1.00-2.00   sec   615 MBytes  5.16 Gbits/sec                 
[  5]   2.00-3.00   sec   612 MBytes  5.13 Gbits/sec                 
[  5]   3.00-4.00   sec   611 MBytes  5.13 Gbits/sec                 
[  5]   4.00-5.00   sec   611 MBytes  5.12 Gbits/sec                 
[  5]   5.00-6.00   sec   611 MBytes  5.13 Gbits/sec
 
Chris - Then why does ifconfig report this?

Code:
enp65s0f0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:0a:f7:58:53:30 txqueuelen 1000 (Ethernet)
RX packets 194530523 bytes 257836045953 (240.1 GiB)
RX errors 132100 dropped 168682 overruns 0 frame 132100
TX packets 237154363 bytes 214077589016 (199.3 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 55 memory 0xd0000000-d07fffff

My Mac properly reports its connection in ifconfig:
Code:
en11: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=2b<RXCSUM,TXCSUM,VLAN_HWTAGGING,TSO4>
ether 00:30:93:0c:1a:58
inet6 fe80::143a:ca35:f482:c463%en11 prefixlen 64 secured scopeid 0x6
inet 192.168.10.32 netmask 0xffffff00 broadcast 192.168.10.255
nd6 options=201<PERFORMNUD,DAD>
media: autoselect (10GbaseT <full-duplex,flow-control>)
status: active
 
Ok. Any recommendations on what to look at?

Both PVE nodes are dropping a lot of packets and not even delivering 1G, even though they're directly connected to a 10G switch in the rack, while my Mac on the other side of the house has no issues doing 5-7Gbps... I haven't modified the network settings other than to LAG the two 10G ports. It's unlikely that both nodes have bad cables, and they're very short runs, so I don't even have a theory on why these boxes are dropping packets.
 
I'm going to do some checking tonight, but after reading this, I think the dropped-packet count is probably because of unexpected VLAN tags. I'd tried to set up my main interface on the 1G onboard card and the 10G to my FreeNAS on a different subnet, and I suspect routing issues, but I don't want to mess with it too much remotely and get myself disconnected.
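For reference, after trimming back to a single subnet, the shape I'm aiming for in /etc/network/interfaces is roughly this (the gateway and bond mode here are illustrative, not copied from the box):

```
auto bond0
iface bond0 inet manual
        bond-slaves enp65s0f0 enp65s0f1
        bond-miimon 100
        bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address 192.168.100.11
        netmask 255.255.255.0
        gateway 192.168.100.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
```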

https://serverfault.com/questions/528290/ifconfig-eth0-rx-dropped-packets
Beginning with kernel 2.6.37, the meaning of the dropped packet count changed. Before, a dropped packet was most likely due to an error. Now, the rx_dropped counter shows statistics for frames dropped because of:

  • Softnet backlog full
  • Bad / Unintended VLAN tags
  • Unknown / Unregistered protocols
  • IPv6 frames when the server is not configured for IPv6
 
OK, so I removed the extra subnet, and I now have very few dropped packets (well under 1%), but still miserable speeds. So, what should I look at next?

Code:
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@svr-01:~# iperf3 -c 192.168.10.32
Connecting to host 192.168.10.32, port 5201
[ 4] local 192.168.100.11 port 46226 connected to 192.168.10.32 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 47.5 MBytes 398 Mbits/sec 51 263 KBytes
[ 4] 1.00-2.00 sec 41.5 MBytes 348 Mbits/sec 7 280 KBytes
[ 4] 2.00-3.00 sec 51.9 MBytes 435 Mbits/sec 65 281 KBytes
[ 4] 3.00-4.00 sec 49.8 MBytes 418 Mbits/sec 45 283 KBytes
[ 4] 4.00-5.00 sec 43.3 MBytes 363 Mbits/sec 15 313 KBytes
[ 4] 5.00-6.00 sec 56.5 MBytes 474 Mbits/sec 24 310 KBytes
[ 4] 6.00-7.00 sec 46.5 MBytes 390 Mbits/sec 2 325 KBytes
[ 4] 7.00-8.00 sec 46.4 MBytes 389 Mbits/sec 17 342 KBytes
[ 4] 8.00-9.00 sec 50.4 MBytes 423 Mbits/sec 2 352 KBytes
[ 4] 9.00-10.00 sec 52.4 MBytes 440 Mbits/sec 6 355 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 486 MBytes 408 Mbits/sec 234 sender
[ 4] 0.00-10.00 sec 484 MBytes 406 Mbits/sec receiver

iperf Done.

root@svr-01:~# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
ether 00:0a:f7:58:53:30 txqueuelen 1000 (Ethernet)
RX packets 2261940 bytes 2287378756 (2.1 GiB)
RX errors 0 dropped 83 overruns 0 frame 0

TX packets 1722591 bytes 1894760691 (1.7 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp65s0f0: flags=6147<UP,BROADCAST,SLAVE,MULTICAST> mtu 1500
ether 00:0a:f7:58:53:30 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 55 memory 0xd0000000-d07fffff

enp65s0f1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:0a:f7:58:53:30 txqueuelen 1000 (Ethernet)
RX packets 2261940 bytes 2287378756 (2.1 GiB)
RX errors 0 dropped 83 overruns 0 frame 0
TX packets 1722591 bytes 1894760691 (1.7 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 69 memory 0xd1000000-d17fffff

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 1247 bytes 133340 (130.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1247 bytes 133340 (130.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


vmbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.100.11 netmask 255.255.255.0 broadcast 192.168.100.255
inet6 fe80::20a:f7ff:fe58:5330 prefixlen 64 scopeid 0x20<link>
ether 00:0a:f7:58:53:30 txqueuelen 1000 (Ethernet)
RX packets 1078357 bytes 2085005754 (1.9 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 574943 bytes 1811953949 (1.6 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Code:
root@svr-01:~# ethtool enp65s0f1
Settings for enp65s0f1:
        Supported ports: [ TP ]
        Supported link modes:   100baseT/Half 100baseT/Full
                                1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Advertised link modes:  100baseT/Half 100baseT/Full
                                1000baseT/Full
                                10000baseT/Full
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: Yes
        Link partner advertised link modes:  1000baseT/Full
                                             10000baseT/Full
        Link partner advertised pause frame use: Symmetric
        Link partner advertised auto-negotiation: Yes
        Speed: 10000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 17
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: g
        Wake-on: d
        Current message level: 0x00000000 (0)

        Link detected: yes
 
Ok, one problem less.. :) So what switch are you using? Have you checked whether you reach the speeds without LACP? Also, I suggest you upgrade to the latest kernel.
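To double-check the bonding side, `/proc/net/bonding/bond0` shows the negotiated mode and per-slave speed. A sketch of what to look for; the sample file contents below are made up for illustration, on the node you would just `cat /proc/net/bonding/bond0`:

```shell
# Illustrative sample of /proc/net/bonding/bond0 contents (not a real capture)
cat > /tmp/bond0.sample <<'EOF'
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
Slave Interface: enp65s0f0
Speed: 10000 Mbps
Slave Interface: enp65s0f1
Speed: 10000 Mbps
EOF
# The lines that matter: the mode and each slave's link speed
grep -E 'Bonding Mode|Speed' /tmp/bond0.sample
```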
 
Yea, I didn't mention it: I'm not using LACP on this connection anymore either; it's a single 10GbE link. I didn't have time last night to do any additional troubleshooting, but it surprised me that I'm not even getting 1G speeds on the link. Next I'll connect one of my Macs there and make sure it can get full speed on that port/cable.

It's a Ubiquiti UniFi XG-16, and my Proxmox boxes are connected to its 4 copper ports. They had some issues with their copper ports originally, but my board revision is supposed to be OK, and my Mac mini was also using a port on this switch and performing OK. I have 10G transceivers I can try as well if it looks like it's the port. The Mac and the PVE hosts are on different VLANs, but that should be the only difference in configuration.

What is the latest kernel, so I can check that? I just set these up in the last couple of months, so I shouldn't be far behind. Can I just do an apt update/apt upgrade, or do I have to update the PVE version? Thanks for the support.
 
As a general recommendation, you should always run `apt update && apt dist-upgrade`! When you upgrade the kernel, a reboot is needed. This should bump you to the latest 5.x version, fixing some of the issues present in older kernels with some hardware.
Check that your Ubiquiti UniFi XG-16 is running the latest firmware (although you mentioned you get higher speeds with your Mac, you still don't reach the full 10Gbps).
Is your CPU powerful enough to saturate the 10G connection? Maybe try running multiple streams/processes https://fasterdata.es.net/performan...ubleshooting-tools/iperf/multi-stream-iperf3/ (although this is for 40Gbps and beyond...).
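iperf3's `-P` flag runs parallel streams (e.g. `iperf3 -c <host> -P 4`), and the aggregate is just the sum of the per-stream bitrates, reported on the [SUM] line. A toy illustration of that summing, with made-up per-stream figures:

```shell
# Hypothetical per-stream bitrates (Gbits/sec) from an iperf3 -P 4 run,
# summed the same way iperf3's [SUM] line reports them:
printf '2.1\n2.0\n1.9\n2.2\n' |
    awk '{ total += $1 } END { printf "[SUM] %.1f Gbits/sec\n", total }'
# prints: [SUM] 8.2 Gbits/sec
```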
Hope this can help you
 
Chris - Thanks, I will do that. I don't think my Mac's drives can saturate the link, and I'm not so worried (at least for now; I will prove that to myself later) about getting a perfect 10G, but I'd like to see something in the 5-9G range, and seeing a result that started with "Mega" was breaking my heart. :)
 
I have the UniFi XG-16 with three Proxmox servers + one FreeNAS server connected through SFP+ DAC.
I have VLANs configured and am getting full 10G speeds.

In Proxmox I am using Open vSwitch:
- Physical SFP+ link 1 set with MTU 9000
- 3 VLANs as bridges: corosync ring 1, iSCSI link 1, LAN
- Second physical SFP+ link set with MTU 9000
- 1 VLAN as bridge: iSCSI link 2

I really should have another switch so that iSCSI multipath is on different switches, but that is a future upgrade.....
Corosync ring 2 will be on a standard gigabit switch (currently setting that up).
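A rough sketch of what one of those OVS links looks like in /etc/network/interfaces; the interface names, address, and VLAN tag here are placeholders, not my exact config:

```
allow-vmbr1 ens1f0
iface ens1f0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr1
        mtu 9000

auto vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports ens1f0 iscsi1
        mtu 9000

allow-vmbr1 iscsi1
iface iscsi1 inet static
        address 10.10.20.13
        netmask 255.255.255.0
        ovs_type OVSIntPort
        ovs_bridge vmbr1
        ovs_options tag=20
        mtu 9000
```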
 
vshaulsk - Thanks, I believe it's possible; I just need to figure out why I'm seeing what I'm seeing. How do you validate your speeds?

Chris - I was really bothered by what I'm seeing with iperf3, given the Plex performance I'm getting from a guest VM to remote users, so on a whim I logged into one of my VMs remotely and ran this speed test. It reports 826Mbps from the internet, when iperf3 doesn't show anything on my LAN above 560Mbps. Of course that's possible, but it seems unlikely. I assume you would expect iperf3 to be a reliable source of data? I'm a little bemused, to be honest.

http://cincinnatibell.speedtestcustom.com/result/a1293770-2bc7-11e9-afa3-259414aa1c61
 
I have validated with iperf3

However, the real tests have been through my Windows 10 VM, and I also have an Ubuntu VM running the Phoronix Test Suite.

The Windows VM runs locally on R620 Proxmox node-3 (local storage LVM-thin, RAID 5, 8 x Intel S3610 SSDs), and I move large files over SMB/CIFS to my FreeNAS (1 gigabyte to 150 gigabytes in size).
- When there is no other heavy traffic, I max out 10G to my FreeNAS; from FreeNAS to Proxmox it is slower (70 to 75% of 10G speed).

- Ubuntu Phoronix Test Suite: I use it to test my multipath iSCSI connection. Here writes max out between 700 MB/s and 900 MB/s (depending on the test). Storage used during the last test: FreeNAS 11.2 running on an R620 with a NetApp disk shelf attached through an LSI 9300-8e (12 Gb/s HBA), 16 x 4TB 7200 RPM SATA drives set up as 8 x 2-disk mirrors.

Dropped packets: I have netdata running on all hosts, and it sends me emails about dropped packets (~0.12%) and also some disk_backlog emails, which may be normal; I need to learn more and understand this better.

However, from a user standpoint everything feels really fast. I actually find that I now use my Windows VM through Remote Desktop for the majority of the things I do. Other than graphics performance, which I don't really care about, the system feels faster. I like sitting on the couch using my work-provided laptop while really doing all the work through the VM.
 
OK, so I now know WHAT is happening, but I'm not sure what to do about it. It's the USG XG-8 router's IPS function. It's a 10G-capable router that's supposed to be able to do full DPI and IPS at 1G, so I don't know why I'm able to get 1G to the internet while inter-VLAN routes are getting throttled at about 600M. I also don't see any way in the current controller to tell it not to inspect inter-VLAN traffic.

So, if my test machines are on the same subnet as the PVE host, or if I turn off IPS, I get 7.8 Gbps, no problem. I get a little less (6.7) to the guest OS, which is a bit odd, but within my current acceptable range. Mostly just leaving this here for folks in the future. I may try to tune it more later, but for now I suppose I'll leave IPS off.

To PVE host
Code:
root@svr-01:~# iperf3 -c 192.168.10.113
Connecting to host 192.168.10.113, port 5201
[ 4] local 192.168.100.11 port 50652 connected to 192.168.10.113 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 918 MBytes 7.70 Gbits/sec 4344 400 KBytes
[ 4] 1.00-2.00 sec 928 MBytes 7.78 Gbits/sec 5538 436 KBytes
[ 4] 2.00-3.00 sec 922 MBytes 7.74 Gbits/sec 4362 762 KBytes
[ 4] 3.00-4.00 sec 951 MBytes 7.98 Gbits/sec 4351 567 KBytes
[ 4] 4.00-5.00 sec 926 MBytes 7.77 Gbits/sec 5288 512 KBytes
[ 4] 5.00-6.00 sec 936 MBytes 7.86 Gbits/sec 5169 465 KBytes
[ 4] 6.00-7.00 sec 934 MBytes 7.83 Gbits/sec 5734 467 KBytes
[ 4] 7.00-8.00 sec 919 MBytes 7.71 Gbits/sec 4401 997 KBytes
[ 4] 8.00-9.00 sec 931 MBytes 7.81 Gbits/sec 5769 570 KBytes
[ 4] 9.00-10.00 sec 949 MBytes 7.96 Gbits/sec 4498 469 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 9.10 GBytes 7.81 Gbits/sec 49454 sender
[ 4] 0.00-10.00 sec 9.09 GBytes 7.81 Gbits/sec receiver

iperf Done.

To VM/guest
Code:
Connecting to host 192.168.10.113, port 5201
[ 4] local 192.168.100.31 port 56586 connected to 192.168.10.113 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 772 MBytes 6.47 Gbits/sec 1873 662 KBytes
[ 4] 1.00-2.00 sec 758 MBytes 6.35 Gbits/sec 851 574 KBytes
[ 4] 2.00-3.00 sec 800 MBytes 6.71 Gbits/sec 1086 732 KBytes
[ 4] 3.00-4.00 sec 808 MBytes 6.77 Gbits/sec 1304 584 KBytes
[ 4] 4.00-5.00 sec 759 MBytes 6.37 Gbits/sec 682 570 KBytes
[ 4] 5.00-6.00 sec 805 MBytes 6.75 Gbits/sec 1378 570 KBytes
[ 4] 6.00-7.00 sec 784 MBytes 6.57 Gbits/sec 623 800 KBytes
[ 4] 7.00-8.00 sec 760 MBytes 6.38 Gbits/sec 834 614 KBytes
[ 4] 8.00-9.00 sec 808 MBytes 6.77 Gbits/sec 1054 624 KBytes
[ 4] 9.00-10.00 sec 795 MBytes 6.67 Gbits/sec 1294 587 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 7.66 GBytes 6.58 Gbits/sec 10979 sender
[ 4] 0.00-10.00 sec 7.66 GBytes 6.58 Gbits/sec receiver

iperf Done.
 
Can you tell me what model/brand of 10Gb card you are using? I am trying to figure out what card would be compatible with my Dell R620 and Proxmox. I have been searching for a list or recommendations on SFP+ cards for Proxmox but can't find any straightforward examples.
 
I am using 3 different cards across my R710, R620 and R720 servers:
1) Built-in card for the Dell R620 (C63DV 0C63DV DELL X520/I350 daughter card, 10GbE network)
2) Add-in Intel card: Intel X520-DA2 10Gb 10GbE 10 Gigabit network adapter
3) Add-in Mellanox card: Mellanox ConnectX-3 EN CX312A dual port 10 Gigabit (you can also use the single-port ConnectX-3)
 
