Using "Disk move" via the GUI to migrate disks to a replacement storage array. The resultant "new" qcow disk are seemingly not sparse. I tried it with the VM running and with VM stopped. I saw in a post from 2020 result may be different if VM was shutdown.
I used both du and qemu-img info to check the actual space utilization.
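For reference, these are roughly the checks I ran on each copy (the path is a placeholder for the actual storage mount):

# apparent size vs. blocks actually allocated
du -h --apparent-size /path/to/images/101/vm-101-disk-1.qcow2
du -h /path/to/images/101/vm-101-disk-1.qcow2

# qcow2 metadata view of the same numbers
qemu-img info /path/to/images/101/vm-101-disk-1.qcow2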
I'm still running PVE 6.x in this cluster, if that makes a difference.
If all my qcow2 disks become non-sparse, the replacement array will not have enough capacity.
I haven't had any luck finding an answer in prior forum threads. I'd appreciate any advice or guidance.
qemu-img info for the disk on the source array (before the move):

image: ..../images/101/vm-101-disk-1.qcow2
file format: qcow2
virtual size: 60 GiB (64424509440 bytes)
disk size: 17.4 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
And for the same disk after the move to the new array:

image: .../101/vm-101-disk-1.qcow2
file format: qcow2
virtual size: 60 GiB (64424509440 bytes)
disk size: 59.1 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
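Worst case, I assume I could manually re-sparsify each disk after the move with something like the below (paths are placeholders), since qemu-img convert skips zeroed regions by default, but I'd rather understand why the move isn't preserving sparseness in the first place:

qm shutdown 101
qemu-img convert -p -O qcow2 /path/to/images/101/vm-101-disk-1.qcow2 /path/to/images/101/vm-101-disk-1.sparse.qcow2
# verify the new file with qemu-img info, then swap it in place of the original and restart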