When doing several disk moves from one NFS storage to another, I noticed that my FreeNAS 10Gb NIC (for NFS) was capping out at 1Gbps.
I did some digging and found that Proxmox is using en1/vmbr0 to carry the data for the move.
How can I configure Proxmox to use the 10Gb links when performing a "Move Disk" action?
Proxmox Config:
Fresh install of PVE 5
en1 (1Gb NIC) slaved to vmbr0, used for Management. 172.16.10.x Network
en6 (1Gb NIC) slaved to vmbr1, used for VM traffic. 172.16.8.x Network
en7 (10Gb NIC) configured for NFS Storage #1 10.10.10.x Network (NFS Traffic Only)
en8 (10Gb NIC) configured for NFS Storage #2 10.10.20.x Network (NFS Traffic Only)
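For reference, here is roughly what my /etc/network/interfaces looks like (the addresses, netmasks and gateway below are placeholders rather than my real values, but the layout matches):

auto lo
iface lo inet loopback

iface en1 inet manual

iface en6 inet manual

# management bridge on the 1Gb NIC (172.16.10.x)
auto vmbr0
iface vmbr0 inet static
        address 172.16.10.21
        netmask 255.255.255.0
        gateway 172.16.10.1
        bridge_ports en1
        bridge_stp off
        bridge_fd 0

# VM traffic bridge on the second 1Gb NIC (172.16.8.x, VMs have their own addresses)
auto vmbr1
iface vmbr1 inet manual
        bridge_ports en6
        bridge_stp off
        bridge_fd 0

# 10Gb link to NFS storage #1
auto en7
iface en7 inet static
        address 10.10.10.21
        netmask 255.255.255.0

# 10Gb link to NFS storage #2
auto en8
iface en8 inet static
        address 10.10.20.21
        netmask 255.255.255.0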
FreeNAS Config:
1Gb NIC Management 172.16.10.x Network
10Gb NIC SMB Traffic 172.16.8.x Network
10Gb NIC NFS Traffic #1 10.10.10.x Network (NFS Traffic Only)
10Gb NIC NFS Traffic #2 10.10.20.x Network (NFS Traffic Only)
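On the Proxmox side, the two NFS storages are defined in /etc/pve/storage.cfg against the FreeNAS 10Gb addresses, roughly like this (the storage IDs, export paths and server IPs are placeholders, not my real values):

nfs: freenas-nfs1
        server 10.10.10.2
        export /mnt/tank/nfs1
        path /mnt/pve/freenas-nfs1
        content images
        options vers=3

nfs: freenas-nfs2
        server 10.10.20.2
        export /mnt/tank/nfs2
        path /mnt/pve/freenas-nfs2
        content images
        options vers=3

My understanding is that the "server" address in each entry determines which interface the NFS traffic leaves on (the kernel should route 10.10.10.x out en7 and 10.10.20.x out en8), which is why I'm confused that the move appears to run over en1/vmbr0 instead.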