PDM Migration Network

Jun 19, 2024
Just tested the Datacenter Manager to migrate from one cluster to another.
It actually works, but the problem is that we have the following network situation:
  • 1 GBit for MGMT and Corosync subnet (NET x.x.1.0/24)
  • 2x LACP 25 GBit for data
    • including the migration network (which works fine within the cluster) (NET x.x.2.0/24)
    • including the second Corosync link
  • 2x LACP 25 GBit for storage
If we do a migration within the cluster, it works as expected and uses the migration network (NET x.x.2.0/24).
But when we do a migration initiated by the Datacenter Manager, the 1 GBit interface is used (NET x.x.1.0/24).
I've got the feeling that this is because the Datacenter Manager is in the same 1 GBit network as the MGMT interfaces (NET x.x.1.0/24).
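For context, the migration network inside the cluster is pinned in /etc/pve/datacenter.cfg (our subnet as the example), which is why in-cluster migrations correctly use x.x.2.0/24:

    migration: secure,network=x.x.2.0/24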
 
Hi,

My guess is that on the PDM you use the cross-cluster migration?
If yes, the hostname/IPs configured for the remote in the PDM are used for the migration, so which network is used depends on your network/DNS/etc. setup.
 
Yes, you're correct, it's a cross-cluster migration.
And yes, "for sure" the endpoints are connected via their DNS names, which point to the "MGMT" IPs on NET x.x.1.0/24.
It would be nice to have the option to use the migration or replication network, or to specify an additional one.
 
In our case the migration and storage networks are non-routed subnets, which makes it hard to specify them in the PDM.
So I could specify the PVE hosts with these IPs, but since the PDM can't reach them, it won't work.
An interesting option would be if the PDM could "discover and try" the migration and replication networks specified in the cluster: check whether the hosts can reach each other there, and use that subnet if they can (see the sketch below).
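Sketched in Python just to illustrate the logic (the subnets, port, and addresses are made up; a real implementation would live in PDM/PVE):

    import ipaddress
    import socket

    def reachable(addr, port, timeout=1.0):
        # True if a TCP connection to (addr, port) succeeds within timeout.
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                return True
        except OSError:
            return False

    def pick_migration_address(peer_addrs, preferred_subnets, port=22):
        # Return the first peer address that lies in a preferred
        # (migration/replication) subnet and is actually reachable.
        for subnet in map(ipaddress.ip_network, preferred_subnets):
            for addr in peer_addrs:
                if ipaddress.ip_address(addr) in subnet and reachable(addr, port):
                    return addr
        return None  # fall back to the configured endpoint

    # Example: prefer the migration subnet over MGMT.
    print(pick_migration_address(
        ['192.0.2.11', '198.51.100.11'],       # addresses the peer node reports
        ['198.51.100.0/24', '192.0.2.0/24']))  # migration net first, then MGMT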
 
Yes, I agree that having a more 'intelligent' way of selecting how to do the actual migration would be good. Would you mind opening a feature request on our bugtracker? https://bugzilla.proxmox.com
It may involve some changes on the PVE side too, so I'm not sure how/when we could do something like that.