I would create a VM as required, with the same disk size as the transferred raw source.
Then I would zfs destroy rpool/data/vm-or-lxc-whatever-VM-ID-disk-ID to delete its disk (the subcommand is destroy; there is no zfs remove).
Then I would replace the disk with zfs send and receive, or with zfs rename rpool/sync/vm-103-disk-0...
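For the rename variant, the swap might look something like this. The VM ID (103) and dataset names are hypothetical; it is printed as a dry run so you can verify the names against your pool layout first:

```shell
VMID=103                          # hypothetical VM ID
DST="rpool/data/vm-$VMID-disk-0"  # placeholder disk the new VM config points at
SRC="rpool/sync/vm-$VMID-disk-0"  # dataset received from the source host
# Dry run: print the commands; drop 'echo' once the dataset names are verified.
echo zfs destroy "$DST"
echo zfs rename "$SRC" "$DST"
```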
I wonder what solutions you use, or would use, to create a public cloud with Proxmox?
Proxmox's web GUI is not suitable for public users, because it reveals too much information about the cluster even with the most basic permissions.
There is also no automatic provisioning, payment processing...
Just a note in case of confusion for future readers. :-)
Looks like MH_MUC revived an old thread.
In the original issue we had Proxmox <= 5, where there is no EFI boot with ZFS, and those instructions still hold true.
The latter issue looks like it is from Proxmox 6, where we can have EFI boot with ZFS, hence the new...
I just took 5 minutes and wrote this, as there are no existing Nagios plugins for monitoring pve-zsync jobs.
I haven't really tested it yet, just sent it to my coworker, but I guess it should work as expected.
Feel free to make it more advanced, share your mods back, or just use it as is...
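Since the script itself isn't pasted above, here is a rough idea of what such a check can look like: a minimal Nagios-style plugin that scans `pve-zsync list` output for jobs in an error state. The column layout (state in the 3rd column) is an assumption; adjust the awk field to match your pve-zsync version.

```shell
#!/bin/sh
# Sketch of a Nagios-style check for pve-zsync jobs.
# Nagios exit codes: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
check_zsync() {
  listing="$1"
  # Assumption: the 3rd column is the job state; count jobs reporting "error".
  bad=$(printf '%s\n' "$listing" | awk 'NR > 1 && $3 == "error" {n++} END {print n+0}')
  if [ "$bad" -gt 0 ]; then
    echo "CRITICAL: $bad pve-zsync job(s) in error state"
    return 2
  fi
  echo "OK: all pve-zsync jobs healthy"
  return 0
}

# On a real node you would feed it the live listing:
#   check_zsync "$(pve-zsync list)"
# Demo with a made-up listing (real column layout may differ):
sample="SOURCE NAME STATE
100/default default ok
103/default default error"
check_zsync "$sample"
```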
There is also an option to "cheat" with this implementation and migrate suspended VMs.
In this case, all you need to fix is the locking mechanism. See here: https://bugzilla.proxmox.com/show_bug.cgi?id=2252
As a bonus, we also get to keep ZFS snapshots on migration!
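The "cheat" boils down to something like the following. VM ID and target node are hypothetical, and it is printed as a dry run, since clearing a lock by hand is exactly the part bug 2252 is about:

```shell
VMID=103      # hypothetical VM ID
TARGET=node2  # hypothetical target node
# Dry run: print the steps; drop 'echo' to actually execute them.
echo qm suspend "$VMID" --todisk   # suspend-to-disk so the RAM state is kept
echo qm unlock "$VMID"             # work around the locking mechanism (bug 2252)
echo qm migrate "$VMID" "$TARGET"  # offline migration reuses the replicated datasets
```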
Seems that you are correct in the case I checked:
root@p32:~# ls -la /etc/ssh/ | grep -i known
-rw------- 1 root root 6601 Oct 29 17:54 ssh_known_hosts
lrwxrwxrwx 1 root root 25 Oct 29 17:25 ssh_known_hosts.old -> /etc/pve/priv/known_hosts
I guess I can just rm ssh_known_hosts and...
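Given the listing above, restoring the symlink would presumably look like the following (simulated in a temp dir here so it is safe to run anywhere; on a real node the paths are /etc/ssh/ssh_known_hosts and /etc/pve/priv/known_hosts):

```shell
# Simulate the broken state in a temp dir.
tmp=$(mktemp -d)
mkdir -p "$tmp/etc/ssh" "$tmp/etc/pve/priv"
echo cluster > "$tmp/etc/pve/priv/known_hosts"  # the cluster-wide copy
echo stale   > "$tmp/etc/ssh/ssh_known_hosts"   # plain file shadowing it
# The fix: drop the plain file, re-create the symlink to the cluster copy.
rm "$tmp/etc/ssh/ssh_known_hosts"
ln -s "$tmp/etc/pve/priv/known_hosts" "$tmp/etc/ssh/ssh_known_hosts"
ls -la "$tmp/etc/ssh" | grep -i known
```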
1) QEMU contains no dirty-bitmap function for delta sync?
2) If you set up replication beforehand, this is exactly what happens on offline migration.
3) See: https://bugzilla.proxmox.com/show_bug.cgi?id=2252 .
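On point 2, setting up replication beforehand can be done in the GUI or with pvesr. A sketch with a hypothetical VM ID and target node, printed as a dry run:

```shell
VMID=103      # hypothetical VM ID
TARGET=node2  # hypothetical target node
# Dry run: print the command; drop 'echo' to actually create the job.
# Job IDs have the form <vmid>-<number>; '*/15' means every 15 minutes.
echo pvesr create-local-job "$VMID-0" "$TARGET" --schedule '*/15'
```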