Hi,
On two Proxmox servers I bonded two network ports (2x1 Gbit/s) with LACP, created the matching LAG on the switch, and created a vmbr on top of the bond. Both servers are connected to the same switch. I expected a speed increase, but transfers between them (e.g. ZFS replication) still look limited to 1 Gbit/s.
The config is the same on both servers and looks like this:
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual
auto bond0
iface bond0 inet manual
slaves eno1 eno2
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer2+3
auto vmbr0
iface vmbr0 inet static
address 192.168.0.191
netmask 255.255.255.0
gateway 192.168.0.1
bridge_ports bond0
bridge_stp off
bridge_fd 0
bridge_vlan_aware yes
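
In case it's useful, this is roughly how the bond status can be checked on each node (standard Linux tools, nothing Proxmox-specific; I'm only sketching the commands, not claiming any particular output):

cat /proc/net/bonding/bond0    # should report "Bonding Mode: IEEE 802.3ad" and list eno1 and eno2 as slaves
ip -d link show bond0          # shows the bond mode and the transmit hash policy currently in use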
The Proxmox management interface is configured on the vmbr (192.168.0.191). I did see examples where the management IP is put directly on the bond instead, but at the moment I don't see why that would make a difference?
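
For reference, a raw throughput test between the two nodes (independent of ZFS) would be something like the following; 192.168.0.192 is just a placeholder for the second node's address:

iperf3 -s                         # on the second node
iperf3 -c 192.168.0.192           # single stream from the first node
iperf3 -c 192.168.0.192 -P 4      # four parallel streams for comparison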
Kind regards