Hi there,
I'm trying to test the various bonds that I have configured, and I'll try to be as exhaustive as I can about the setup. I have 4 nodes in total:
- NAS bonded to MikroTik CRS317 (2x10G SFP+)
- MikroTik CRS317 bonded to MikroTik CRS310 (2x10G SFP+)
- Each of the 3x NUCs bonded to MikroTik CRS310 (2x2.5G)
On the Proxmox side, all the nodes are configured the same way:
Code:
root@pve-nuc12-3:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto enp113s0
iface enp113s0 inet manual

auto enp114s0
iface enp114s0 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves enp113s0 enp114s0
    bond-miimon 100
    bond-mode 802.3ad
    bond-downdelay 200
    bond-updelay 200
    bond-xmit-hash-policy layer3+4
    bond-lacp_rate fast

auto vmbr0
iface vmbr0 inet static
    address 192.168.10.213/24
    gateway 192.168.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

source /etc/network/interfaces.d/*
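For reference, the negotiated bond mode and hash policy can be double-checked on each node with something like this (the grep is only there to trim the output):
Code:
# confirm mode, hash policy, and that both slaves joined the same LACP aggregator
grep -E 'Bonding Mode|Transmit Hash|MII Status|Slave Interface|Aggregator ID' /proc/net/bonding/bond0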
Balancing traffic in (NAS > NUC)
Running 2x iperf clients on the NAS against 2x iperf servers on different ports on the NUC => this works well: I can see a total of 4.8G in the switch UI.
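For completeness, the inbound test looks roughly like this (iperf3 syntax, the ports are arbitrary examples; 192.168.10.213 is the NUC address from the config above):
Code:
# on the NUC: two iperf3 servers listening on different ports
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &
# on the NAS: two parallel clients, one per server port
iperf3 -c 192.168.10.213 -p 5201 -t 30 &
iperf3 -c 192.168.10.213 -p 5202 -t 30 &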
Balancing traffic out (NUC > NAS)
Running 2x iperf clients on the NUC against 2x iperf servers on different ports on the NAS => this does not work: all the traffic goes out via the same network interface on the NUC, so the overall bandwidth is shared between the two streams.
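The outbound test is just the mirror of the above (the NAS address is a placeholder here), and the per-slave TX counters on the NUC can be watched to see which physical NIC each flow leaves on:
Code:
# on the NAS: two iperf3 servers listening on different ports
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &
# on the NUC: two parallel clients, one per server port (replace <NAS-IP> with the NAS address)
iperf3 -c <NAS-IP> -p 5201 -t 30 &
iperf3 -c <NAS-IP> -p 5202 -t 30 &
# on the NUC: watch the per-slave TX byte counters
watch -n 1 'ip -s link show enp113s0; ip -s link show enp114s0'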
Any idea what is going on here?
Thanks,
D.