Routing to NFS share

bb8

New Member
Sep 6, 2023
Hi Everyone,

First of all, thanks for letting me register here. As I'm new to Proxmox and networking, I have the following issue. I have two servers here with the following network configs:

Backup server (with the NFS share):

auto lo
iface lo inet loopback

iface eno1 inet manual
    dns-nameservers 1.1.1.1 1.1.1.2 9.9.9.9
# dns-* options are implemented by the resolvconf package, if installed

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.10.150/24
    gateway 192.168.10.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

auto enp97s0
iface enp97s0 inet static
    address 172.20.20.100
    netmask 255.255.255.0
    up ip route add 172.20.20.102/32 dev enp97s0
    down ip route del 172.20.20.102/32

auto enp97s0d1
iface enp97s0d1 inet static
    address 172.20.20.100
    netmask 255.255.255.0
    up ip route add 172.20.20.101/32 dev enp97s0d1
    down ip route del 172.20.20.101/32

and the client:

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual
    dns-nameservers 192.168.10.10

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.10.140/24
    gateway 192.168.10.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

auto ens6
iface ens6 inet static
    address 172.20.20.101
    netmask 255.255.255.0
    up ip route add 172.20.20.100/32 dev ens6
    down ip route del 172.20.20.100/32

auto ens6d1
iface ens6d1 inet static
    address 172.20.20.101
    netmask 255.255.255.0
    up ip route add 172.20.20.102/32 dev ens6d1
    down ip route del 172.20.20.102/32
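
To rule out the routed point-to-point links themselves, my plan is to check which interface the client actually uses to reach the NFS server and to measure the raw TCP throughput over the DAC link (just a rough sketch; it assumes iperf3 is installed on both machines):

# on the client: show which interface/route is used to reach the NFS server
ip route get 172.20.20.100

# on the backup server: start an iperf3 server
iperf3 -s

# on the client: measure raw TCP throughput to the backup server (4 streams, 30 s)
iperf3 -c 172.20.20.100 -P 4 -t 30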

I set up an NFS share on the backup server so that I can back up VMs from the client.

/etc/exports
/tank/wncs/backup/pve/dump/backup_share 172.20.20.101(rw,sync,no_subtree_check)
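
To make sure the export is actually active, I re-export and list it on the backup server (exportfs is part of nfs-kernel-server, as far as I know):

# on the backup server: re-read /etc/exports and show the active exports
exportfs -ra
exportfs -v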

and mounted the share on the client:

/etc/fstab
172.20.20.100:/tank/wncs/backup/pve/dump/backup_share /mnt/backup_share nfs defaults 0 0

which gives me:

172.20.20.100:/tank/wncs/backup/pve/dump/backup_share on /mnt/backup_share type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.20.20.101,local_lock=none,addr=172.20.20.100)
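
To separate plain NFS throughput from the vzdump job itself, I'm thinking of writing a test file straight to the mount on the client (only a rough test; file name and size are arbitrary):

# on the client: sequential write to the NFS mount, flushed to disk at the end
dd if=/dev/zero of=/mnt/backup_share/ddtest bs=1M count=10240 conv=fdatasync status=progress
rm /mnt/backup_share/ddtest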

However, whenever I back up a VM from the client, I only get 180-220 MiB/s at most, even though 172.20.20.100 and 172.20.20.101 are connected directly between 2x Mellanox Connect-X3 MCX254A via DAC (40 Gbps):
INFO: creating vzdump archive '/mnt/backup_share/dump/vzdump-qemu-8001-2023_09_06-13_29_22.vma.zst'
INFO: starting kvm to execute backup task
WARN: iothread is only valid with virtio disk or virtio-scsi-single controller, ignoring
INFO: started backup task '42036af7-d252-411d-9ae0-4640861c8b36'
INFO: 0% (420.5 MiB of 250.0 GiB) in 3s, read: 140.2 MiB/s, write: 125.0 MiB/s
INFO: 1% (2.7 GiB of 250.0 GiB) in 13s, read: 230.2 MiB/s, write: 123.0 MiB/s
INFO: 2% (5.2 GiB of 250.0 GiB) in 26s, read: 196.9 MiB/s, write: 146.8 MiB/s
INFO: 3% (7.6 GiB of 250.0 GiB) in 39s, read: 190.5 MiB/s, write: 184.5 MiB/s
INFO: 4% (10.1 GiB of 250.0 GiB) in 51s, read: 217.5 MiB/s, write: 208.9 MiB/s
INFO: 5% (12.6 GiB of 250.0 GiB) in 1m 3s, read: 215.4 MiB/s, write: 206.2 MiB/s

I expected transfer speeds roughly 10x higher than what I'm getting now, and I'm wondering whether this is related to the networking setup or to the storage hardware.
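
To look at the storage side in isolation, I could also run a local sequential write test on the backup server directly into the exported dataset (a sketch only, assuming fio is installed; I'd avoid dd from /dev/zero here because zeros would be compressed away if compression is enabled):

# on the backup server: 10 GiB sequential write into the exported dataset
fio --name=seqwrite --directory=/tank/wncs/backup/pve/dump/backup_share \
    --rw=write --bs=1M --size=10G --ioengine=libaio --end_fsync=1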

My storage hardware / ZFS pool backing the NFS share looks like this:
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 02:34:48 with 0 errors on Sun Sep 3 17:37:12 2023
config:

        NAME                        STATE     READ WRITE CKSUM
        tank                        ONLINE       0     0     0
          raidz2-0                  ONLINE       0     0     0
            scsi-35000cca03e942fb4  ONLINE       0     0     0
            scsi-35000cca058181a3c  ONLINE       0     0     0
            scsi-35000cca046251640  ONLINE       0     0     0
            scsi-35000cca046295678  ONLINE       0     0     0
            scsi-35000cca03ec0b8a0  ONLINE       0     0     0
            scsi-35000cca03ec0b814  ONLINE       0     0     0
          raidz2-1                  ONLINE       0     0     0
            scsi-35000cca0461c3d18  ONLINE       0     0     0
            scsi-35000cca0461c614c  ONLINE       0     0     0
            scsi-35000cca0461ce014  ONLINE       0     0     0
            scsi-35000cca0461b6d48  ONLINE       0     0     0
            scsi-35000cca0461af3c0  ONLINE       0     0     0
            scsi-35000cca03ec0d9a4  ONLINE       0     0     0
        logs
          mirror-6                  ONLINE       0     0     0
            fioa1                   ONLINE       0     0     0
            fiob1                   ONLINE       0     0     0
        cache
          fioa3                     ONLINE       0     0     0
          fiob3                     ONLINE       0     0     0

errors: No known data errors
The devices for the log and cache are 2x Sandisk FusionIO 2 765G.
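
Because the export uses sync, the clients' writes should end up on those mirrored log devices first, so I'd also double-check the properties of the exported dataset (assuming the export path corresponds to its own dataset; otherwise the parent dataset name would go here):

# on the backup server: how the exported dataset handles sync writes, compression, record size
zfs get sync,compression,recordsize tank/wncs/backup/pve/dump/backup_share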

Any hint would be much appreciated. Thank you!