Anyone using MPTCP between Proxmox cluster nodes in production and "forcing" MPTCP sockets on everything?

Giovanni

Renowned Member
Apr 1, 2009
I am curious about the Linux kernel's native Multipath TCP (MPTCP), not the Proxmox iSCSI multipath-tools approach. Is anyone here using MPTCP and forcing all PVE nodes to open MPTCP sockets instead of plain TCP?
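For context, this is roughly what I had in mind on each node (just a sketch; the address and interface name are placeholders for the second uplink, and I haven't validated any of this on PVE itself):

    # enable MPTCP in the kernel (available since 5.6)
    sysctl -w net.mptcp.enabled=1

    # tell the in-kernel path manager that the second uplink may carry extra subflows
    ip mptcp endpoint add 10.10.20.11 dev enp1s0f1 subflow
    ip mptcp endpoint show

    # force an MPTCP-unaware application onto MPTCP sockets
    # (mptcpize ships with the mptcpd package and works via LD_PRELOAD)
    mptcpize run <command>

The part I am least sure about is that last step: whether wrapping the PVE services themselves (migration, storage traffic, etc.) this way is practical.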

In theory, my homelab PVE cluster, whose nodes each have dual SFP+ 10Gb uplinks on the same network, could move more data between nodes than a single uplink allows for one TCP flow.
  • Hypothesis: if MPTCP can be enabled easily on all PVE nodes in my cluster, VM #1 on cluster_nodeA should be able to push more than 10Gb/s of traffic towards VM #2 on cluster_nodeB, since the aggregate uplink between cluster_nodeA and cluster_nodeB is 20Gb/s (dual SFP+). Rough test plan below.
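To check that, I was going to run something like this between the two nodes (again just a sketch; "nodeB" is a placeholder hostname):

    # on cluster_nodeB: MPTCP-enabled iperf3 server
    mptcpize run iperf3 -s

    # on cluster_nodeA: a single stream towards nodeB;
    # sustained >10Gb/s would suggest both subflows are actually carrying traffic
    mptcpize run iperf3 -c nodeB -t 30

    # inspect MPTCP sockets / counters on either node
    ss -Mni
    nstat | grep -i mptcp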

Thoughts?