On Proxmox 6.2, is anyone else experiencing an issue with LVM-Thin or ZFS where, when you migrate a guest with local disks from node to node, it first fully allocates the destination disk before actually copying the data from the source? And is anyone also seeing the destination server's IO maxed out, with the guests on it freezing or unable to access their storage, while it does its allocation? I've found that it doesn't matter whether the guest is using IDE, SATA, VirtIO or SCSI.
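For reference, this is roughly the command such a migration corresponds to; the VM ID and node name are taken from the task log below, and the exact options are my assumption of what the GUI passes rather than anything pulled from the log:

# Online migration of VM 204 to node 'prox14', copying its local disk along with it
# (my guess at the CLI equivalent of the GUI "Migrate" action)
qm migrate 204 prox14 --online --with-local-disks

Migration task log: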
2020-06-26 06:11:56 use dedicated network address for sending migration traffic (10.50.40.241)
2020-06-26 06:11:56 starting migration of VM 204 to node 'prox14' (10.50.40.241)
2020-06-26 06:11:57 found local disk 'local-lvm:vm-204-disk-0' (in current VM config)
2020-06-26 06:11:57 copying local disk images
2020-06-26 06:11:57 starting VM 204 on remote node 'prox14'
2020-06-26 06:12:01 start remote tunnel
2020-06-26 06:12:02 ssh tunnel ver 1
2020-06-26 06:12:02 starting storage migration
2020-06-26 06:12:02 scsi0: start migration to nbd:unix:/run/qemu-server/204_nbd.migrate:exportname=drive-scsi0
drive mirror is starting for drive-scsi0 with bandwidth limit: 31457 KB/s
((Here it consumes all available disk IO on the destination node. After 5-20 minutes, depending on the size of the disk, the migration continues like normal. See the note after the log for one way to watch this.))
drive-scsi0: transferred: 0 bytes remaining: 107374182400 bytes total: 107374182400 bytes progression: 0.00 % busy: 0 ready: 0
drive-scsi0: transferred: 33554432 bytes remaining: 107340627968 bytes total: 107374182400 bytes progression: 0.03 % busy: 0 ready: 0
drive-scsi0: transferred: 67108864 bytes remaining: 107307073536 bytes total: 107374182400 bytes progression: 0.06 % busy: 0 ready: 0
drive-scsi0: transferred: 100663296 bytes remaining: 107273519104 bytes total: 107374182400 bytes progression: 0.09 % busy: 0 ready: 0
(((truncated)))
2020-06-26 07:11:49 migration status: completed
drive-scsi0: transferred: 107416059904 bytes remaining: 0 bytes total: 107416059904 bytes progression: 100.00 % busy: 0 ready: 1
all mirroring jobs are ready
drive-scsi0: Completing block job...
drive-scsi0: Completed successfully.
drive-scsi0 : finished
2020-06-26 07:11:50 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=prox14' root@10.50.40.241 pvesr set-state 204 \''{}'\'
2020-06-26 07:11:52 stopping NBD storage migration server on target.
Logical volume "vm-204-disk-0" successfully removed
2020-06-26 07:12:10 migration finished successfully (duration 01:00:14)
TASK OK
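During the stall flagged inside the log above, a way to confirm that the allocation really is happening on the destination (assuming the default 'pve' volume group and the disk name from this log; adjust both to your setup) is to watch the new thin LV's allocated percentage climb before any progress shows up in the task log:

# Watch the target thin LV's allocation grow on the destination node
# while the task log still reports "transferred: 0 bytes"
# ('pve' is the default VG name -- adjust to match your setup)
watch -n 5 'lvs -o lv_name,lv_size,data_percent pve/vm-204-disk-0'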
Here is iotop on an LVM-Thin destination node (a different test VM, 182) before the drive data actually starts transferring.
896 be/4 root 0.00 B/s 4.68 M/s 0.00 % 99.99 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17899 be/4 root 0.00 B/s 4.68 M/s 0.00 % 99.99 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17902 be/4 root 0.00 B/s 4.90 M/s 0.00 % 99.99 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17895 be/4 root 0.00 B/s 4.90 M/s 0.00 % 99.99 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17854 be/4 root 0.00 B/s 4.90 M/s 0.00 % 99.99 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17893 be/4 root 0.00 B/s 4.90 M/s 0.00 % 99.99 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17903 be/4 root 0.00 B/s 4.90 M/s 0.00 % 99.99 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17900 be/4 root 0.00 B/s 4.68 M/s 0.00 % 99.99 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17906 be/4 root 0.00 B/s 4.68 M/s 0.00 % 99.99 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17904 be/4 root 0.00 B/s 4.68 M/s 0.00 % 99.99 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17892 be/4 root 0.00 B/s 4.68 M/s 0.00 % 99.99 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17898 be/4 root 0.00 B/s 4.68 M/s 0.00 % 99.99 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17897 be/4 root 0.00 B/s 4.68 M/s 0.00 % 99.99 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17901 be/4 root 0.00 B/s 3.68 M/s 0.00 % 81.87 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17894 be/4 root 0.00 B/s 3.68 M/s 0.00 % 81.78 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
17905 be/4 root 0.00 B/s 3.68 M/s 0.00 % 81.68 % kvm -id 182 -name App-Test -chardev socket,id=qmp,p~st_tick_policy=discard -incoming unix:/run/qemu-server/182.migrate -S
8637 be/4 root 104.56 K/s 0.00 B/s 0.00 % 9.98 % [kworker/u96:2+dm-thin]
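My working theory, and it is only a guess, is that the mirror job zero-fills the entire target LV before any real data is sent, and the thin pool's own zeroing of newly provisioned blocks makes that even heavier, which would match the pure-write IO above. Something like this should show (and, if you accept the trade-off, change) the pool's zeroing setting; pve/data is the default local-lvm thin pool, so adjust the VG/pool names for your setup:

# Show whether the thin pool zeroes newly allocated blocks
lvs -o lv_name,segtype,zero,discards pve/data

# Turn off zeroing of newly provisioned blocks on the pool
# (new blocks will no longer be zero-filled -- weigh that for your workload)
lvchange --zero n pve/data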
*Edited for clarity