I note this as well. I think it falls under a feature request.
Looking at iotop while moving disks through the GUI.
No VMs or containers running.
Moving from a 4x NVMe RAIDZ10 pool on the PCIe x16 bus to a 6x HDD RAIDZ10 pool on 6 Gb/s SAS.
16 GB max ARC, 512 GB DDR3.
In bash, dd from /dev/zero gets around 800 MB/s writes with both 1M and 128K block sizes (rough commands below).
This gives context for what might be my R720's single-thread limit, or a limit of the ZFS pool config (the pools were created through the GUI).
If my best case is around 800 MB/s, that's roughly 6.4 Gbit/s (6400 Mbit/s).
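For reference, the dd runs looked something like this (a minimal sketch; the target path /hdd/dd-test.bin and the 10 GiB total size are placeholders, not the exact commands):

# 1M blocks; conv=fdatasync forces the data to disk before dd reports a rate
dd if=/dev/zero of=/hdd/dd-test.bin bs=1M count=10240 conv=fdatasync
# 128K blocks, same total size
dd if=/dev/zero of=/hdd/dd-test.bin bs=128K count=81920 conv=fdatasync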
It looks like moving a VM disk is multi-threaded,
and with that I get maybe 500 MB/s reads/writes.
You also get a nice progress percentage.
OK, not 800 MB/s, but it's not slow and delivers most of the potential speed.
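Under the hood the GUI runs qemu-img convert (visible in the iotop output below), which copies with parallel coroutines. For anyone experimenting by hand, qemu-img exposes knobs for that; a rough sketch against plain zvol paths (the pool/volume names are just examples, and the Proxmox-specific zeroinit: target wrapper is left out):

# -m sets the number of parallel coroutines (default 8, max 16); -W allows out-of-order writes
# -t none / -T none bypass the host page cache for destination / source
qemu-img convert -p -n -m 16 -W -t none -T none -f raw -O raw \
    /dev/zvol/nvme/vm-102-disk-0 /dev/zvol/hdd/vm-102-disk-0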
Moving an LXC disk/mount point looks like it uses rsync.
I think rsync is single-threaded without some fancy scripting.
There I get 135 MB/s,
and you get no progress updates.
I do not know whether lxc move is better/faster, or whether it would cause problems with other parts of Proxmox,
but maybe there is room for improving LXC disk migrations on older hardware (see the sketches after the link).
https://documentation.ubuntu.com/lxd/en/latest/reference/manpages/lxc/move/
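For manual copies at least, two things can be bolted on at the command line (paths here are placeholders, not what Proxmox actually passes): rsync can report overall progress itself, and the copy can be crudely parallelised per top-level directory.

# Same style of options as in the iotop output below, plus an overall progress/rate readout (rsync >= 3.1)
rsync -aHAX --numeric-ids --whole-file --sparse --one-file-system \
    --info=progress2 /srcvol/ /dstvol
# Crude parallelism: one rsync per top-level entry, 4 at a time
# (hard links across entries are not preserved, and unusual file names will break the ls | xargs pipe)
cd /srcvol && ls -A | xargs -P4 -I{} rsync -aHAX --numeric-ids --sparse ./{} /dstvol/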
VM disk move via the Proxmox GUI:
Total DISK READ: 492.13 M/s | Total DISK WRITE: 382.21 M/s
Current DISK READ: 300.59 M/s | Current DISK WRITE: 351.89 M/s
TID PRIO USER DISK READ DISK WRITE> COMMAND
315024 be/4 root 10.66 M/s 29.61 M/s qemu-img convert -p -n -t none -T none -f raw -O raw /dev/zvol/nvme/vm-102-disk-0 zeroinit:/dev/zvol/hdd/vm-102-disk-0
315015 be/4 root 14.22 M/s 26.07 M/s qemu-img convert -p -n -t none -T none -f raw -O raw /dev/zvol/nvme/vm-102-disk-0 zeroinit:/dev/zvol/hdd/vm-102-disk-0
315019 be/4 root 15.40 M/s 24.78 M/s qemu-img convert -p -n -t none -T none -f raw -O raw /dev/zvol/nvme/vm-102-disk-0 zeroinit:/dev/zvol/hdd/vm-102-disk-0
315018 be/4 root 17.77 M/s 24.56 M/s qemu-img convert -p -n -t none -T none -f raw -O raw /dev/zvol/nvme/vm-102-disk-0 zeroinit:/dev/zvol/hdd/vm-102-disk-0
315017 be/4 root 17.77 M/s 22.84 M/s qemu-img convert -p -n -t none -T none -f raw -O raw /dev/zvol/nvme/vm-102-disk-0 zeroinit:/dev/zvol/hdd/vm-102-disk-0
315022 be/4 root 20.14 M/s 22.51 M/s qemu-img convert -p -n -t none -T none -f raw -O raw /dev/zvol/nvme/vm-102-disk-0 zeroinit:/dev/zvol/hdd/vm-102-disk-0
315023 be/4 root 24.88 M/s 19.53 M/s qemu-img convert -p -n -t none -T none -f raw -O raw /dev/zvol/nvme/vm-102-disk-0 zeroinit:/dev/zvol/hdd/vm-102-disk-0
315021 be/0 root 14.69 M/s 15.36 M/s [zvol]
315029 be/0 root 13.48 M/s 14.39 M/s [zvol]
315020 be/0 root 15.56 M/s 14.16 M/s [zvol]
315034 be/0 root 12.28 M/s 13.64 M/s [zvol]
303953 be/0 root 11.70 M/s 13.63 M/s [zvol]
303955 be/0 root 14.29 M/s 13.00 M/s [zvol]
741 be/0 root 18.06 M/s 12.48 M/s [zvol]
315016 be/4 root 33.18 M/s 12.45 M/s qemu-img convert -p -n -t none -T none -f raw -O raw /dev/zvol/nvme/vm-102-disk-0 zeroinit:/dev/zvol/hdd/vm-102-disk-0
315027 be/0 root 25.53 M/s 11.80 M/s [zvol]
315026 be/0 root 18.62 M/s 11.53 M/s [zvol]
315030 be/0 root 16.88 M/s 11.30 M/s [zvol]
315032 be/0 root 22.08 M/s 10.66 M/s [zvol]
315256 be/0 root 20.68 M/s 10.66 M/s [zvol]
315028 be/0 root 23.08 M/s 10.55 M/s [zvol]
315031 be/0 root 15.86 M/s 9.83 M/s [zvol]
303956 be/0 root 25.56 M/s 9.58 M/s [zvol]
315033 be/0 root 20.39 M/s 8.91 M/s [zvol]
315025 be/4 root 37.92 M/s 8.29 M/s qemu-img convert -p -n -t none -T none -f raw -O raw /dev/zvol/nvme/vm-102-disk-0 zeroinit:/dev/zvol/hdd/vm-102-disk-0
LXC disk move via the Proxmox GUI:
------
Total DISK READ: 239.27 M/s | Total DISK WRITE: 130.88 M/s
Current DISK READ: 107.75 M/s | Current DISK WRITE: 0.00 B/s
TID PRIO USER DISK READ DISK WRITE> COMMAND
299435 be/4 root 0.00 B/s 130.88 M/s rsync --stats -X -A --numeric-ids -aH --whole-file --sparse --one-file-system --bwlimit=0 /var/lib/lxc/300/.copy-volume-2/ /var/lib/lxc/300/.copy-volume-1
1 be/4 root 0.00 B/s 0.00 B/s init
2 be/4 root 0.00 B/s 0.00 B/s [kthreadd]
3 be/0 root 0.00 B/s 0.00 B/s [rcu_gp]