Hi!
I'm looking for a way to speed up migration between 2 servers. I bought 2 quad-port GbE cards and wanted to bond the 4 ports. The setup in Proxmox was easy enough and I could get the connection established, but the resulting speed is only about 1.2 Gb/s, while I was expecting at least 3.5 Gb/s.
The way I set it up is as follows:
Linux bond with balance-rr; I enter the 4 slaves enp1s0f0 enp1s0f1 enp1s0f2 enp1s0f3 and set MTU 9000 (see the config sketch below).
Then I add the bond to the bridge ports: eno1 bond0
I do the same on the other server. I connect directly with UTP cables (no switch).
I remove the cable that was plugged into eno1.
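On each host the resulting /etc/network/interfaces ends up roughly like this (reconstructed from the GUI settings above; the address is just a placeholder, and the miimon/STP/FD lines are the values Proxmox fills in by default):

auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1 enp1s0f2 enp1s0f3
        bond-miimon 100
        bond-mode balance-rr
        mtu 9000

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        bridge-ports eno1 bond0
        bridge-stp off
        bridge-fd 0

(plus the usual "iface ... inet manual" stanzas for eno1 and the four slave ports)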
It works, but at roughly 1.2 Gb/s it is barely faster than a single NIC.
I have also done bonding with VyOS inside virtual machines, passing through the same NICs. That way I get around 2.2 Gb/s: still slower than expected, but twice the speed I get on the host machine, so the hardware should be capable.
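If anyone wants to reproduce the measurement, a raw throughput test of the bond would look something like this (assuming iperf3 is installed on both hosts; 10.10.10.2 is just a placeholder for the other server's bridge address):

# on the receiving server
iperf3 -s

# on the sending server, 4 parallel streams for 30 seconds
iperf3 -c 10.10.10.2 -P 4 -t 30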
Any ideas where to look for a solution?