Is this mandatory? To follow the current model, migration will be done over the cluster network.
Yes, 99% of the time the migration will be done over the cluster network, but being able to choose the network could be an interesting feature.
Is this mandatory? To follow the current model, migration will be done over the cluster network.
That was what I meant. Both nodes must be part of the same cluster.
To follow the current model, migration will be done over the cluster network.
Yes, for a first version I'll keep the same cluster network that migration currently uses.
I think that defining a migration network could be done later as a more global feature (for both live migration and storage migration).
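To give a rough idea of what I have in mind, such a global setting could be a single cluster-wide entry, e.g. in /etc/pve/datacenter.cfg. This is only a hypothetical sketch; the key name and syntax are invented here and don't exist today:

    # hypothetical sketch: dedicate a network to migration traffic
    # the "migration:" key and its syntax are made up for illustration
    migration: secure,network=10.10.10.0/24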
As qcow2 supports extension, initially supporting just one local disk should be enough (at least for me; if I have to increase the VM disk space, I'll extend the qcow2 rather than add multiple disks).
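Growing a single disk is already a one-liner today, for example (VMID 100, virtio0 and the path are just example values, and the filesystem inside the guest still has to be grown afterwards):

    # grow the disk from the Proxmox side
    qm resize 100 virtio0 +20G
    # or directly on the qcow2 file with qemu-img (VM must be stopped)
    qemu-img resize /var/lib/vz/images/100/vm-100-disk-1.qcow2 +20G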
But please add a flag to choose whether to delete the older VM or not. DO NOT delete the older VM automatically. I had a very, very bad experience with XenServer during a live migration. The live migration exited with an error while the new VM was starting up (I don't know why, but XS is full of bugs) and XS deleted the old virtual machine anyway. Luckily the new VM had been transferred properly, but it wasn't started on the new host.
So, DO NOT delete automatically: only delete if the "delete VM after successful migration" flag is set to true (and please make false the default).
Just to stay on the safe side.
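Something like this is what I mean; the option names here are only illustrative, not existing parameters:

    # hypothetical sketch: keep the source VM unless explicitly told otherwise
    # --delete-source is a made-up name for illustration, default 0 (off)
    qm migrate 100 targetnode --online --with-local-disks --delete-source 0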
This will be the same when migrating, so I need to delete the old VM, I mean the VM configuration, from the source node.
But I can keep the disks.
The problem is how to manage the old disks (if we want to remove them later).
Currently we can see them in storage management, but we don't yet have a feature to remove them from the GUI or API.
(For a qcow2 it's pretty simple to remove them manually, but for users with ZFS or LVM an API could be useful.)
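Today the manual cleanup looks roughly like this (the VMID and volume names are only examples; double-check before removing anything):

    # qcow2 on a directory storage: just remove the file
    rm /var/lib/vz/images/100/vm-100-disk-1.qcow2
    # ZFS storage: destroy the zvol
    zfs destroy rpool/data/vm-100-disk-1
    # LVM storage: remove the logical volume
    lvremove pve/vm-100-disk-1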
I can't help you there, as I'm new to Proxmox, but I know this for sure: DO NOT destroy the old VM. I don't know how, but don't destroy it.
Otherwise you would have to check for EVERYTHING and catch every kind of exception that could block the migration while still destroying the old VM.
As I wrote, I had a very bad experience with XenServer. Don't replicate the same thing in PVE. Rename the VM, delete the VM configuration (but keep a backup file somewhere), or anything else. Do what you want, but do not delete it.
A small, stupid uncaught bug could lead to big, big trouble.
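Even just copying the config aside before removing it from the source node would be enough, something like this (the VMID and destination path are only examples):

    # keep a copy of the source VM config before it disappears from the source node
    cp /etc/pve/qemu-server/100.conf /root/100.conf.pre-migration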
How do you check that the disk transfer completed properly? What happens if the VM doesn't boot on the new node due to disk corruption (even though the transfer looked OK) and you have already deleted the source image?
The drive-mirror command will fail in case of error (network error, or destination storage I/O error), so it should be safe.
This is exactly the same method as the "move disk" option, but across the network.
I have never seen an error with this method, and I have migrated more than 1000 disks this year.
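Under the hood it is the qemu drive-mirror QMP command pointed at an NBD export on the target node; roughly something like this (the device name, IP and port are illustrative values, not the exact command PVE sends):

    { "execute": "drive-mirror",
      "arguments": { "device": "drive-virtio0",
                     "target": "nbd:10.0.0.2:60000:exportname=drive-virtio0",
                     "sync": "full",
                     "format": "raw",
                     "mode": "existing" } }

Mirroring only reads the source disk, so if the network drops or the destination storage returns an I/O error the job fails without the source image ever being touched.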