TrueNAS Scale (Virtual) unable to use 10Gb

sdamaged99

Hi chaps,

Having a strange issue where my Intel X540-T2 NIC won't show its full capabilities inside my TrueNAS Scale VM. It's using a VirtIO driver, connected to a 10Gb switch.

Ethtool on the Proxmox host shows the correct link mode, duplex, and speed. Ethtool inside TrueNAS shows everything as unknown, and it seems to be stuck at 1Gb speeds when transferring data from my PC (which also uses the same NIC at 10Gb).
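
For reference, this is roughly the comparison I'm making (interface names here are just examples, not necessarily my exact ones):

# on the Proxmox host, against the physical 10Gb port - reports 10000Mb/s full duplex
ethtool enp5s0f0
# inside the TrueNAS VM, against the virtio NIC - Speed/Duplex come back as "Unknown!"
ethtool enp6s18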

qm config shows:

net0: virtio=FE:E0:94:C4:50:8C,bridge=vmbr1,queues=4,tag=60

On the host, ethtool run against the bridge shows everything as "not reported"

VirtIO drivers are installed by default inside Scale.

This is from TrueNAS:

[screenshot: ethtool output inside TrueNAS]

This is the host NIC (bridged):

[screenshot: ethtool output for the bridged host NIC]

Nothing particularly special about the setup: standard MTU on everything, no jumbo frames, etc.

iperf results to the VM from my 10Gb desktop:

[screenshot: iperf results]

The disk array inside TrueNAS is 3 x 8-disk (14TB) RAIDZ2 vdevs, so there's plenty to saturate a 10Gb link from my local NVMe SSD - three vdevs of six data disks each at spinning-disk sequential speeds comfortably exceeds the roughly 1.2GB/s a 10Gb link can carry.

Any ideas?

Many thanks

EDIT - Just to add, 10Gb worked fine until I virtualised the system.
 
the virtual NIC doesn't really have a line speed - after all, there are no cables involved ;) so the question is rather: where is your bottleneck coming from? could you post the full VM config and "pveversion -v" output?
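
e.g. on the host (with <vmid> being your VM's ID):

# full VM configuration
qm config <vmid>
# versions of the kernel, QEMU, and the rest of the PVE stack
pveversion -v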
 
I assume the passed-through device is your HBA/SCSI controller?

you could try:
- adding more cores
- adding more queues on the vNIC (see the sketch below)
- checking with atop inside the VM and on the host while running a benchmark, to see whether some resource bottoms out
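
for the queues, something like this (just a sketch - keep your own MAC/bridge/VLAN tag, and as a rule of thumb match the queue count to the vCPU count):

qm set <vmid> --net0 virtio=FE:E0:94:C4:50:8C,bridge=vmbr1,queues=8,tag=60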
 
No joy - upped cores from 4 to 8 and increased queues to 8, but it's still capping out at 115MB/s, so definitely stuck at gigabit for some reason.

Nothing seems to be bottlenecking on the host or the VM - RAM and CPU usage are all fine. Seems to be an issue on the virtual NIC side.
 
potentially stupid question - what about the PVE host itself? i.e. iperf between the hypervisor and your other machine, and between the hypervisor and the VM, might be interesting.
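
with iperf3 that would look roughly like this (the IP is a placeholder):

# on the hypervisor
iperf3 -s

# from the desktop, then again from inside the VM
iperf3 -c <hypervisor-ip>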
 
The Proxmox host connection is 1Gb, but I have the two 10Gb ports each in their own bridge. One of these is assigned to TrueNAS.

Not sure how I can test iperf via one of the other (bridge) interfaces - if you know how, please let me know.
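
Thinking about it, I guess the bridge would need its own IP to test from. Maybe something like this (the .5 address is made up for the 10Gb subnet)?

# temporarily give vmbr1 an address, then bind iperf3 to it
ip addr add 10.200.60.5/24 dev vmbr1
# TrueNAS is 10.200.60.20
iperf3 -c 10.200.60.20 -B 10.200.60.5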
 
can you show us how your bridges are configured?

for example, a screenshot from the network section inside the Proxmox GUI?

I'm willing to bet there is an issue hidden in the configuration somewhere.
 
Similar problem here with TrueNAS Core: 10Gbit SFP+ with jumbo frames, 8 queues, and VirtIO was much more performant running bare metal compared to virtualised on PVE with the same hardware. But here it is still more than gigabit.
 
Entirely possible it's a network configuration issue on my side - I'm still quite new to Proxmox.

Basically I have an onboard 1Gb NIC I use for the host (didn't see the point of using a 10Gb port for that), and then a dual-port 10Gb NIC I use for the VMs.

[screenshot: Proxmox network configuration]
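
In text form, my /etc/network/interfaces looks roughly like this (relevant stanzas only; interface names and addresses are approximations, not copied exactly):

# onboard 1Gb NIC, bridged for host/management traffic
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.200.40.5/24
        gateway 10.200.40.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# the two 10Gb ports, one bridge each for the VMs (no host IP)
iface enp5s0f0 inet manual
iface enp7s0f0 inet manual

auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp5s0f0
        bridge-stp off
        bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
        bridge-ports enp7s0f0
        bridge-stp off
        bridge-fd 0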
 

The desktop is 10.200.40.30 and TrueNAS is 10.200.60.20.

The 1Gb connection for the Proxmox host goes into a layer 3 UniFi switch, and the two 10Gb cables go into a 10Gb layer 2 switch.
 
try removing vmbr1, then put enp5 into vmbr0 (leaving enp7 out) to confirm traffic isn't taking a wrong path.
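
as a sketch (assuming the port names above - the point is that vmbr0 temporarily rides the 10Gb port instead of the onboard NIC):

# vmbr0 temporarily bridging the 10Gb port instead of the onboard NIC
# (keep your existing host address/gateway)
auto vmbr0
iface vmbr0 inet static
        address 10.200.40.5/24
        gateway 10.200.40.1
        bridge-ports enp5s0f0
        bridge-stp off
        bridge-fd 0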
 
the layer 3 switch is doing the routing, I assume?
would it be possible, just for testing, to put the desktop on the same subnet/VLAN as the TrueNAS and then see how the speed is?
 
Hmm, I wonder if it's due to traffic traversing the layer 3 switch at 1Gb because of the VLANs, thus limiting the speed.

I'll try those options above and report back, thanks a lot.

Also, my pfSense router only has a 1Gb connection, so could that also be an issue?
 
Bah. Edited, as it's still not working.

Also, I found out that my UniFi switch is only layer 2. I should have known that really, given the lack of anything IP-related mentioned!

I'll explain the current layout in a bit of detail in case I'm not being very clear:

1. PC and the Proxmox bridged NIC both connected directly to the Netgear 10Gb switch.
2. Fibre cable from the 10Gb switch to the UniFi switch (1Gb uplink only).
3. pfSense LAN connected to the 10Gb switch.

I put my desktop on the same VLAN as my server and it was slightly quicker (150MB/s vs 115MB/s) copying a large file from the array to my NVMe SSD, but nowhere near what I used to see bare metal (700MB/s-1GB/s).

EDIT - OK, so this is very interesting.

I copied a file from my array to my NVMe and was only seeing 150MB/s.

I then deleted the file from my array and copied it back (connected to the server VLAN) and saw 900MB/s.
When I tried again from my other VLAN I only saw 400MB/s - so perhaps there is something affecting performance somewhere. Still, it's better than 115MB/s.
 
