Proxmox 9 in a 3-node cluster with Ceph: does VM migration use the vmbr0 network? How to get faster VM migration speed?

aluisell

Hi all,

is it correct that, by default, Proxmox 9 in a 3-node cluster configuration with Ceph uses the vmbr0 network for VM migration?
The current node network config uses a separate network for the Ceph cluster (10.10.10.x/24), Ceph public (10.10.20.x/24), Corosync (172.16.1.x/24), and vmbr0 (192.168.1.x/24), on four different NICs, all at 10 GbE. The cluster's main network (link 0) is the Corosync network, and the additional link (link 1) has been assigned to vmbr0.
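
Each node's /etc/network/interfaces is roughly along these lines (simplified sketch; the actual interface names and host addresses differ):

auto eno1
iface eno1 inet static
        address 10.10.10.11/24
        # Ceph cluster network

auto eno2
iface eno2 inet static
        address 10.10.20.11/24
        # Ceph public network

auto eno3
iface eno3 inet static
        address 172.16.1.11/24
        # Corosync, cluster link 0

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11/24
        bridge-ports eno4
        bridge-stp off
        bridge-fd 0
        # management / VM bridge, cluster link 1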

I have also tried the same on a very similar configuration, but using 40 GbE NICs for the Ceph networks, and got the same results. I have forced the migration network setting to use either the Ceph public or the Ceph cluster NIC, but the speed didn't improve; it still remains close to 1 GB/s. How can I get faster VM migration speed?

Thanks

Andrea
 
You can either use the GUI Datacenter option or edit /etc/pve/datacenter.cfg and change the migration network, e.g., migration: type=insecure,network=169.254.1.0/24
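
For example, to push migrations onto the Ceph public network from your setup, the line in /etc/pve/datacenter.cfg would be something like this (adjust the CIDR to whichever dedicated network you want to carry migration traffic):

migration: type=insecure,network=10.10.20.0/24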

Obviously, you need a dedicated migration network for this to work. Also, if it is an isolated network, you can use the insecure option for faster migrations.
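
If you want to test the effect before changing the datacenter-wide default, qm migrate accepts the same settings for a single migration, something like this (VM ID 100 and node name pve2 are placeholders):

qm migrate 100 pve2 --online --migration_network 10.10.20.0/24 --migration_type insecure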

I use isolated switches, and all Ceph public, Ceph cluster, and Corosync traffic uses the 169.254.1.0/24 network. Is that considered best practice? No. Does it work? Yes. Another reason for using a 169.254.0.0/16 network is that it's IPv4 link-local address space, so that traffic isn't going anywhere except locally.
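
For what it's worth, the Ceph side of that layout is nothing special; /etc/ceph/ceph.conf just points both networks at the same link-local subnet (a sketch of my layout, not a recommendation):

[global]
    public_network  = 169.254.1.0/24
    cluster_network = 169.254.1.0/24

with the Corosync ring addresses also living in 169.254.1.0/24.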