Hello,
I have two Dell servers, an R730XD and an R720XD, both with quad SFP+ 10Gb/s ports connected to a single Cisco Nexus 3172T on two QSFP+ ports. The 4 ports on each server are placed in a LAG running LACP, hashed at layer2+3. The Cisco port channel is a trunk carrying the same VLANs to each server. All ports are set to an MTU of 9000 on the switch and on both servers. I previously ran TrueNAS Scale bare-metal on both servers, with one acting as a periodic backup target for the other. The backups would generally saturate one link of the LAG, transferring at about 9.1Gb/s. I have since converted the R730XD to Proxmox VE 8.0.4 and installed TrueNAS Scale as a VM with ownership of the HBA controlling the server's hard drives and a single VirtIO network interface. I enabled multiqueue with 12 queues to match the vCPU count and ran ethtool -L ens18 combined 12 inside the VM to make it use them. Now when I back up the TrueNAS Scale VM to the bare-metal TrueNAS Scale server, the transfer never exceeds 550Mb/s. Any ideas why the traffic is so slow?
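For anyone wanting to reproduce the checks I've done so far, this is roughly the sequence I'd run from inside the VM. The interface name (ens18) and target IP (10.9.240.25 for the bare-metal box) are placeholders from my setup; substitute your own. The `|| true` guards just keep the script going if a tool is missing.

```shell
#!/bin/sh
# Diagnostic sketch -- ens18 and 10.9.240.25 are assumptions from my setup.

# Jumbo-frame ping payload: 9000-byte MTU minus 20 IP + 8 ICMP header bytes.
PAYLOAD=$((9000 - 28))
echo "ping payload for MTU 9000: $PAYLOAD"

# 1. Confirm the 12 combined queues actually took effect inside the VM:
ethtool -l ens18 || true

# 2. Verify jumbo frames survive the whole path (DF bit set, so any hop
#    with a smaller MTU will reject rather than fragment):
ping -M do -s "$PAYLOAD" -c 3 10.9.240.25 || true

# 3. Separate network from disk: run `iperf3 -s` on the bare-metal TrueNAS,
#    then test raw TCP throughput from the VM with 4 parallel streams:
iperf3 -c 10.9.240.25 -P 4 || true
```

If iperf3 shows full line rate but the backup is still slow, the bottleneck is in the replication/disk path rather than the network.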
R730XD
72 x Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz (2 sockets)
384 GB RAM
HBA330 Mini in passthrough mode
SR-IOV enabled
Interface config:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual
    mtu 9000

auto eno2
iface eno2 inet manual
    mtu 9000

auto eno3
iface eno3 inet manual
    mtu 9000

auto eno4
iface eno4 inet manual
    mtu 9000

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2 eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    mtu 9000

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4092
    mtu 9000

auto vmbr0.100
iface vmbr0.100 inet static
    address 10.9.240.24/24
    gateway 10.9.240.1
    mtu 9000
#Lan

auto vmbr0.200
iface vmbr0.200 inet manual
#Public

auto vmbr0.300
iface vmbr0.300 inet manual
    mtu 9000
#WiFi

auto vmbr0.400
iface vmbr0.400 inet manual
    mtu 9000
#Private

auto vmbr0.500
iface vmbr0.500 inet manual
    mtu 9000
#DMZ
TrueNAS Scale VM
12 vCPU (2 sockets, 6 cores each)
128 GB RAM
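For completeness, the host side of the multiqueue setup can be confirmed from the Proxmox CLI. This is a sketch assuming the VM's NIC sits on vmbr0; the VMID (100 here) is a placeholder for your actual VM.

```shell
# Hypothetical VMID 100 -- adjust to your VM.
# queues=12 enables virtio multiqueue on the host side (it must match what
# you pass to `ethtool -L` inside the guest); mtu=9000 makes the vNIC match
# the bridge so jumbo frames are not silently clamped to 1500.
qm set 100 --net0 virtio,bridge=vmbr0,queues=12,mtu=9000
```

If `queues` is missing from the VM config, the `ethtool -L` inside the guest has nothing to enable and traffic stays on a single queue.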