I have 2 vmbr bridges in Proxmox: vmbr0 is the default and sits on a standard gigabit Ethernet NIC. Its IP is 192.168.1.10
vmbr2 is using a bonded 10G NIC, IP is 192.168.1.20
The problem is that I want to use ZFS send to replicate a dataset to another system, which I need to run as root on the host. But when I connect via SSH to the IP of the 10G bonded bridge (192.168.1.20), I still only get 1G speeds. Testing with iperf3, I max out at 980 Mbps while SSHed into 192.168.1.20 (the 10G NIC bridge). When I run the same iperf3 test from an LXC container attached to the vmbr2 10G bridge, I saturate the 10G connection. I don't understand why this is the case, and I have reproduced it on other Proxmox systems as well.
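For reference, the iperf3 test I'm running looks roughly like this (192.168.1.30 is a placeholder for the other system's IP; the -B flag pins the client to the source address of the 10G bridge):

```shell
# On the other system (the ZFS receive target), start an iperf3 server:
iperf3 -s

# On the Proxmox host, run the client bound to the 10G bridge address.
# -B forces iperf3 to use 192.168.1.20 as the source address:
iperf3 -c 192.168.1.30 -B 192.168.1.20
```

Even with -B set this way on the host, I still only see ~980 Mbps.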
My goal here is to ssh into Proxmox as
root@192.168.1.20
and have all traffic flow through vmbr2, rather than vmbr0, when doing this ZFS send backup. That in turn should make the ZFS send much faster. How can I do this?
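For context, the backup I want to run looks roughly like this (pool and dataset names are placeholders); it's the ssh hop here that I want to go over vmbr2:

```shell
# From the other system: pull a snapshot off the Proxmox host over its
# 10G address, piping zfs send into a local zfs receive.
ssh root@192.168.1.20 'zfs send tank/data@backup1' | zfs recv backup/data
```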