Yeah. When I had the same question I found an older answer saying all replication traffic goes over vmbr0. So the advice was to adjust your network config so that vmbr0 carries the replication traffic, with public traffic coming in over a newly created vmbr1.
So that's what I set up, and it works fine.
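For reference, a minimal sketch of that kind of split in /etc/network/interfaces. The NIC names (eno1/eno2) and addresses here are just placeholders for illustration, not my actual config:

Bash:
# vmbr0: private bridge carrying replication/migration traffic
auto vmbr0
iface vmbr0 inet static
        address 10.200.1.11/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# vmbr1: public-facing bridge for VM/client traffic
auto vmbr1
iface vmbr1 inet static
        address 192.0.2.11/24
        gateway 192.0.2.1
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0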
Later on I found out there's also an option called `migration` in the
/etc/pve/datacenter.cfg
file, which controls the network all of your migration traffic goes over:
Bash:
# cat /etc/pve/datacenter.cfg
crs: ha-rebalance-on-start=1,ha=static
ha: shutdown_policy=migrate
keyboard: en-us
migration: insecure,network=10.200.1.0/24
I'm not sure whether the network listed there also carries replication traffic, but that entry definitely works for migration traffic.
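If you'd rather not edit the file by hand, I believe the same setting can be changed through the cluster options API. Something like this should work, though the exact parameter format is from memory, so double-check it against the docs first:

Bash:
# pvesh set /cluster/options --migration network=10.200.1.0/24,type=insecure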
The (optional)
`insecure`
keyword there tells Proxmox to use a direct, unencrypted TCP connection between hosts for the migration data (the memory copy part, anyway), rather than tunnelling it over SSH. Much, much faster. Perfectly fine in a homelab, but not really suitable for a production environment.
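You can also override this per migration instead of cluster-wide; if I recall correctly, `qm migrate` takes matching flags. The VM ID and target node here are made up for the example:

Bash:
# qm migrate 100 pve2 --online --migration_network 10.200.1.0/24 --migration_type insecure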
The
`ha: shutdown_policy=migrate`
line might be of interest for clustered setups too. With that setting, when you tell a host box to shut down (or reboot), it automatically migrates any VMs on it to other hosts first, and only then does the shutdown or reboot. That's kind of important, as by default Proxmox will just shut the VMs down along with the host, causing an outage for anyone using those VMs (!!!).