Hello!
Two migration scenarios; both the source and the destination server use local ZFS storage, let's call it local-zfs.
1st scenario (with preparation):
1. set up replication to the destination, trigger a manual replication run, wait until it completes - only the used space is copied
2. migrate - this copies the disk delta, memory etc.
3. delete the replication job (cleanup)
!!! only the actually used space is copied !!! (rough command sketch below)
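Roughly, scenario 1 looks like this on the CLI. This is only a sketch: VMID 100, job ID 100-0 and the node names nodeA/nodeB are placeholders, and I'm assuming the built-in pvesr storage replication with both sides on local-zfs.

```
# on the source node: create a replication job to the destination node
pvesr create-local-job 100-0 nodeB --schedule '*/15'

# trigger the first run right away instead of waiting for the schedule;
# this ships only the allocated blocks of the zvol to the destination
pvesr schedule-now 100-0

# once replication has finished, migrate - only the delta since the last
# replication snapshot plus the RAM still has to cross the network
qm migrate 100 nodeB --online

# cleanup: remove the replication job again
pvesr delete 100-0
```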
2nd scenario (direct migration):
- space allocation (a thin disk would become thick; fortunately ZFS is smart enough not to allocate zeroed blocks)
- whole-disk copy (ZFS is smart in this step too, but the whole disk still seems to be transferred over the network)
- copy memory etc.
!!! the whole disk is copied, and as everyone knows LVM is far worse here, because on the destination thin becomes thick and a trim inside the guest is needed !!! (command sketch below)
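Scenario 2 is just the one-step migration with local disks, something like the following (again only a sketch with the same placeholder names; --targetstorage is only needed if the storage ID differs on the destination):

```
# direct migration: the full disk image is streamed to the destination
# in addition to RAM - there is no prior replication snapshot to diff against
qm migrate 100 nodeB --online --with-local-disks --targetstorage local-zfs
```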
It is clear that for a sparse VM scenario 1 is far superior (both in duration and in network transfer), so why can't the default migration operation detect that source and destination are both ZFS and optimize the operation accordingly?
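For comparison, what the replication path effectively does under the hood is a zfs send/receive, which only ships allocated blocks and can send increments. Roughly (assuming local-zfs maps to rpool/data and the disk zvol is vm-100-disk-0, both placeholders):

```
# snapshot the zvol and send it; zfs send streams only allocated blocks,
# so a sparse disk stays small both on the wire and on the destination
zfs snapshot rpool/data/vm-100-disk-0@seed
zfs send rpool/data/vm-100-disk-0@seed | ssh nodeB zfs receive rpool/data/vm-100-disk-0

# later, only the changes since @seed need to go over the network
zfs snapshot rpool/data/vm-100-disk-0@delta
zfs send -i @seed rpool/data/vm-100-disk-0@delta | ssh nodeB zfs receive rpool/data/vm-100-disk-0
```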