TrueNAS -> ProxMox slow network and read/write performance

AreUSirius

New Member · Joined May 28, 2021
Hey there,

I've tested and googled some settings and got a small bump in read/write performance, but it's still not good. Via iperf I can't even get above 1.7 Mbits/sec from the storage to the Proxmox VE host.
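For reference, this is the kind of test I'm running (plain iperf2; the IP is a placeholder):

# on the TrueNAS box
iperf -s

# on a ProxMox node
iperf -c xx.xx.10.200 -t 30 -i 5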

My TrueNAS Server:

TrueNAS Core 12.0-U5.1

HP DL380e Gen8

Intel Xeon CPU E5-2450L

128 GiB RAM

Intel X520-DA2 Dual-Port 2x 10GbE-LAN SFP+ PCIe x8 E10G42BTDASATA (only one of the two ports in use)

HBA LSI 9201-8i 6G PCIe x8 (used, flashed to IT mode for ZFS/FreeNAS, 6 Gbps)

The server uses HP's own 12x 3.5" SAS hard drive backplane board.

I have a pool of 6x 8 TB WD Red Pro NAS drives arranged as three mirrored vdevs.

I use a SLOG device:

AMPCOM M.2 NVMe to PCIe 4.0 x4 adapter

Samsung 980 PRO 1 TB PCIe 4.0
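For reference, that layout corresponds roughly to the following (just a sketch: the pool name matches the dd paths further down, but the da*/nvd* device names are placeholders, not my actual ones):

# three mirrored vdevs of two disks each
zpool create ST-01-HWDEKA7 \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5

# NVMe SSD attached as the SLOG (log vdev)
zpool add ST-01-HWDEKA7 log nvd0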



Pool settings: [screenshots]

Network/interface during the test: [screenshots]

ProxMox Cluster:

The network on each node is bridged from the physical interfaces.
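For context, each bridge looks roughly like this in /etc/network/interfaces (a sketch: vmbr0 is the usual default name, the port name enp4s0f0 comes from the ethtool output further down, and the addresses are placeholders):

auto vmbr0
iface vmbr0 inet static
        address xx.xx.10.201/24
        gateway xx.xx.10.1
        bridge-ports enp4s0f0
        bridge-stp off
        bridge-fd 0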

[screenshots of each node's network configuration]

All of them use 10 Gbit NICs.

Storage is connected via NFS, but to be honest my read/write is pretty slow. Everything for the VMs (VZDump backups, ISOs, disk images, containers) is stored on the TrueNAS server.
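For reference, the NFS storage entry in /etc/pve/storage.cfg looks roughly like this (a sketch: the storage ID is made up and the server IP is a placeholder, though the export path matches the dataset in the dd test below):

nfs: truenas-vms
        server xx.xx.10.200
        export /mnt/ST-01-HWDEKA7/st1-proxmox-vms
        path /mnt/pve/truenas-vms
        content images,iso,vztmpl,backup
        options vers=3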

Switch where they're connected:

UniFi Switch 16XG 10G 16-Port

https://www.notebooksbilliger.de/ubiquiti+unifiswitch+16+port

--- --- ---

IOzone: https://pastebin.com/KLC1VvNV
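(For anyone wanting to reproduce: a run along these lines; the exact flags here are a guess, not necessarily what produced that pastebin.)

# automatic mode over a range of file sizes; -i 0 / -i 1 = write and read tests
iozone -a -s 1g -r 128k -i 0 -i 1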


TrueNAS dd test:

dd if=/dev/zero of=/mnt/ST-01-HWDEKA7/st1-proxmox-vms/test.dat bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 7.992925 secs (2623760479 bytes/sec)

[~]# dd of=/dev/null if=/mnt/ST-01-HWDEKA7/st1-proxmox-vms/test.dat bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 3.901468 secs (5375288884 bytes/sec)
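One caveat I'm aware of with those numbers: /dev/zero compresses perfectly, so with lz4 on the dataset the write figure is inflated, and the read is probably served straight from ARC. Something like fio with incompressible buffers should be more honest (a sketch; fio would need to be installed, and the file size should exceed ARC to mean much):

# sequential write with incompressible data
fio --name=seqtest --directory=/mnt/ST-01-HWDEKA7/st1-proxmox-vms \
    --rw=write --bs=1M --size=16g --refill_buffers --end_fsync=1

# sequential read of the same file
fio --name=seqtest --directory=/mnt/ST-01-HWDEKA7/st1-proxmox-vms \
    --rw=read --bs=1M --size=16g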

iperf from ProxMox to TrueNAS: [screenshot]

iperf from TrueNAS to ProxMox: [screenshot]

This was tested in a Windows VM: [screenshots]

So I can't really get over a gigabit, and I don't know why.
 

Ok, you added a lot of great content, but a couple of things strike me as odd. Why are two of your nodes swapping so much?

Have you monitored how busy Proxmox is? Can you upload a network topology for us to see? Can you list any firewall settings along with your iptables rules? Are all of your PVE nodes communicating with each other at link speed? Do all of your nodes have 10 Gbps NICs? (I see one node with an E5645, for example, which is getting a bit older and does not have much I/O throughput; is it working nicely with the 10 Gbps cards?)

I have a similar setup (albeit with way less RAM) at multiple sites with TrueNAS and PVE; none of my clusters give me trouble, and I'm running similar HPE gear. Even if it were non-standard gear, there are very few hardware reasons for 1.72 Mbps upload, so there must be something wrong with a driver or network configuration.
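For the swap question, checking is quick on each node (nothing fancy; vm.swappiness=10 is just a common tweak, not something specific to your setup):

# how much swap is in use and where
free -h
swapon --show

# make the kernel less eager to swap
sysctl vm.swappiness
echo 'vm.swappiness=10' >> /etc/sysctl.conf && sysctl -p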
 

Swap got fixed; I don't really know why it was that high.

Actually, I don't have a network topology diagram, but it's simple: they are all connected to the same switch, which goes to another 10 Gbit switch and then into a gateway, all from UniFi.


Server 10Gbit NIC -->

Server 10Gbit NIC -->

Server 10Gbit NIC --> UniFi Switch 10Gbit --> UniFi Switch 10Gbit --> UniFi Gateway 1Gbit --> WAN

Server 10Gbit NIC -->

Server 10Gbit NIC -->
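One thing I still want to rule out across those hops is an MTU mismatch; if jumbo frames are enabled somewhere but not end to end, throughput collapses exactly like this. Quick check from a node (8972 assumes MTU 9000 end to end; 1472 is the payload for a standard 1500):

# don't-fragment ping at jumbo payload size (8972 + 28 bytes of headers = 9000)
ping -M do -s 8972 -c 3 xx.xx.10.206

# same at standard MTU for comparison
ping -M do -s 1472 -c 3 xx.xx.10.206

# current MTU on the bridge and the physical port
ip link show vmbr0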

@hwdeka1:~$ sudo iptables -L -v -n | more
Chain INPUT (policy ACCEPT 24446 packets, 6213K bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 22651 packets, 4444K bytes)
pkts bytes target prot opt in out source destination

@hwdeka1:~$ sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT

@hwdeka1:~$ lspci | grep Network
01:00.0 Ethernet controller: Intel Corporation 82575EB Gigabit Network Connection (rev 02)
01:00.1 Ethernet controller: Intel Corporation 82575EB Gigabit Network Connection (rev 02)
04:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
04:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

@hwdeka1:~$ sudo find /sys | grep drivers.*04:00
/sys/bus/pci/drivers/ixgbe/0000:04:00.1
/sys/bus/pci/drivers/ixgbe/0000:04:00.0


@hwdeka1:~$ sudo ethtool enp4s0f0
Settings for enp4s0f0:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: 10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Port: Direct Attach Copper
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes

@hwdeka1:~$ sudo ethtool enp4s0f1
Settings for enp4s0f1:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: 10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Port: Direct Attach Copper
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes

@hwdeka3:~$ sudo iptables -L -v -n | more
Chain INPUT (policy ACCEPT 11G packets, 67T bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 8236M packets, 24T bytes)
pkts bytes target prot opt in out source destination

@hwdeka3:~$ sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT

@hwdeka3:~$ lspci | grep Network
02:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

@hwdeka3:~$ sudo find /sys | grep drivers.*02.00
/sys/bus/pci/drivers/ixgbe/0000:02:00.0
/sys/bus/pci/drivers/ixgbe/0000:02:00.1

@hwdeka3:~$ sudo ethtool ens3f0
Settings for ens3f0:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: 10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Port: Other
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes

@hwdeka3:~$ sudo ethtool ens3f1
Settings for ens3f1:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: 10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Port: Other
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes


@hwdeka6:~$ sudo iptables -L -v -n | more
Chain INPUT (policy ACCEPT 15G packets, 92T bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 7173M packets, 73T bytes)
pkts bytes target prot opt in out source destination

@hwdeka6:~$ sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT

@hwdeka6:~$ lspci | grep Ethernet
04:00.0 Ethernet controller: NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter (rev 42)
04:00.1 Ethernet controller: NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter (rev 42)
04:00.2 Ethernet controller: NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter (rev 42)
04:00.3 Ethernet controller: NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter (rev 42)
05:00.0 Ethernet controller: QLogic Corp. cLOM8214 1/10GbE Controller (rev 54)
05:00.1 Ethernet controller: QLogic Corp. cLOM8214 1/10GbE Controller (rev 54)

@hwdeka6:~$ sudo find /sys | grep drivers.*05:00
/sys/bus/pci/drivers/qlcnic/0000:05:00.1
/sys/bus/pci/drivers/qlcnic/0000:05:00.0

@hwdeka6:~$ sudo ethtool ens7f0
Settings for ens7f0:
Supported ports: [ TP FIBRE ]
Supported link modes: 10000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: 10000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
MDI-X: Unknown
Supports Wake-on: g
Wake-on: g
Current message level: 0x00000000 (0)

Link detected: yes

@hwdeka6:~$ sudo ethtool ens7f1
Settings for ens7f1:
Supported ports: [ TP FIBRE ]
Supported link modes: 10000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: 10000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: off
MDI-X: Unknown
Supports Wake-on: g
Wake-on: g
Current message level: 0x00000000 (0)

Link detected: yes


@hwdeka8:/$ sudo iptables -L -v -n | more
Chain INPUT (policy ACCEPT 2760M packets, 18T bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 1769M packets, 7577G bytes)
pkts bytes target prot opt in out source destination

@hwdeka8:/$ sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT

@hwdeka8:/$ lspci | grep Ethernet
04:00.0 Ethernet controller: NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter (rev 42)
04:00.1 Ethernet controller: NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter (rev 42)
04:00.2 Ethernet controller: NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter (rev 42)
04:00.3 Ethernet controller: NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter (rev 42)
0b:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
0b:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

@hwdeka8:/$ sudo find /sys | grep drivers.*0b:00
/sys/bus/pci/drivers/ixgbe/0000:0b:00.0
/sys/bus/pci/drivers/ixgbe/0000:0b:00.1

@hwdeka8:/$ sudo ethtool ens9f0
Settings for ens9f0:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: 10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Port: Direct Attach Copper
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes

@hwdeka8:/$ sudo ethtool ens9f1
Settings for ens9f1:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: 10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Port: Direct Attach Copper
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes

When I run an iperf test from node to node, it's pretty slow:


@hwdeka6:~$ iperf -c xx.xx.10.201
------------------------------------------------------------
Client connecting to xx.xx.10.201, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local xx.xx.10.206 port 49960 connected with xx.xx.10.201 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.2 sec 636 KBytes 513 Kbits/sec


@hwdeka1:~$ iperf -c xx.xx.10.206
------------------------------------------------------------
Client connecting to xx.xx.10.206, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local xx.xx.10.201 port 49678 connected with xx.xx.10.206 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.4 sec 639 KBytes 505 Kbits/sec

The one server with the E5645 CPU is not running any VMs.
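Given ~500 Kbit/s node to node, this looks more like packet loss or a bad link than a tuning problem, so next I'll watch the NIC error counters during a test and retry with parallel streams (interface names as in the outputs above):

# look for error/drop/CRC counters incrementing during an iperf run
ethtool -S enp4s0f0 | grep -Ei 'err|drop|crc|miss'

# retry with multiple parallel streams and a longer run
iperf -c xx.xx.10.201 -P 4 -t 30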
 
