[ SOLVED ] (LOOKS LIKE A QM MOVE_DISK BUG) Reclaim disk space when moving a virtual disk from LVM to Directory

Gilberto Ferreira

Renowned Member
Hi there...
When I move a virtual disk from LVM to Directory, the process creates a qcow2 file with all of the space allocated. Then, in order to reclaim the free space, I need to fill a file like /test.img with zeros, remove /test.img afterwards, and run fstrim / to free the space. I have already enabled ssd=1 and discard=1 in the VM's disk configuration and also installed and enabled qemu-guest-agent. With a virtual disk of only 10GB (virtual size) that is OK, but with a disk of 800GB or 1TB we have a problem. Is there any other method besides using dd to fill the disk and then running fstrim?
Thanks for any help.
 
Does anyone know of any way to recover virtual disk space when migrating from LVM to directory?
For example, I have a VM with a 10GB disk and migrated it from lvm-thin to directory in qcow2 format.
It turns out that during the disk migration the file was completely filled up to 10G, despite using qcow2, which in principle is thin provisioned...
So what I did was use dd inside the VM to fill up its free space, like this:
dd if=/dev/zero of=/test.img
And then delete test.img and run fstrim -v / to reclaim the space freed by deleting the /test.img file.
If the virtual disk is 10G and the directory storage has, say, 500G, fine... But what if the virtual disk is, I don't know, 800GB?
Filling 800GB with zeros not only takes a long time, it will also blow past the available space of the directory storage.
Does anyone have any other ideas? Thank you.
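For reference, the whole reclaim sequence inside the guest is roughly the following (a sketch of the method described above; the file name and mount point are just examples, and discard=1 must already be set on the disk):

# inside the VM: fill the free space with zeros, then delete the filler and trim
dd if=/dev/zero of=/test.img bs=1M status=progress   # stops with "No space left on device" once the filesystem is full
rm /test.img
fstrim -v /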
 
Hi,

For a start, you could create test.img with fallocate -l xG test.img, where 'x' is the size (in GB) you want to reclaim. It follows the same method you are taking now, except it's instant, because no IO is required [1].

1. https://man7.org/linux/man-pages/man1/fallocate.1.html
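For example, to reclaim roughly 30G (a sketch of the suggestion above; size and path are placeholders):

fallocate -l 30G /test.img   # allocated instantly, no data is written
rm /test.img
fstrim -v /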
 
Hi
Thanks for your reply...
But creating the image to reclaim the space is not the issue.
The issue is that if the directory storage does not have sufficient space, the move disk operation will fail, even when working with the qcow2 format, which I suppose is thin provisioned...
Let's take an example.
My lvm-thin has 40G of space, OK... just an example. I have created a VM virtual disk with 100G of space, and the OS in it takes up only 5G.
Now I have the Directory Storage with the same 40G of space, but when I move the disk from LVM-Thin to Directory Storage using the qcow2 format, the disk keeps growing until it fills nearly 40GB, and then the move disk operation stops with an error.
I'm just looking for a way to move the virtual disk from LVM to Directory while keeping both the virtual size and the real size...
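As a side note, the virtual size versus the really allocated size can be checked on the host like this (a sketch; the LV and file names follow the examples in this thread):

lvs pve/vm-100-disk-0                               # Data% shows how much of the thin volume is really used
qemu-img info /DATA/images/100/vm-100-disk-0.qcow2  # compare "virtual size" with "disk size" (really allocated)
du -h /DATA/images/100/vm-100-disk-0.qcow2          # blocks actually allocated on the directory storage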
 
[ SOLVED ]

UPDATE! IT WORKS NICELY WITH THE COMMAND:

qemu-img convert -O qcow2 /dev/pve/vm-100-disk-0 /DATA/images/100/vm-100-disk-0.qcow2
proxmox01:/DATA/images/100# ls -lh
total 4.4G
-rw-r--r-- 1 root root 4.4G Sep 30 09:59 vm-100-disk-0.qcow2
proxmox01:/DATA/images/100# qemu-img info vm-100-disk-0.qcow2
image: vm-100-disk-0.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 4.35 GiB
cluster_size: 65536
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false

Perhaps it's some kind of bug?
 
Ok! Just to be sure, I did it again...

In LVM-Thin I have a 100.00g VM disk. Note that only about 6% of it is filled.

lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 18.87g 31.33 1.86
root pve -wi-ao---- 9.75g
swap pve -wi-ao---- 4.00g
vm-100-disk-0 pve Vwi-aotz-- 100.00g data 5.91


Now I tried to use move_disk:

cmd: qm move_disk 100 scsi0 VMS --format qcow2
(VMS is the Directory Storage)

I used this command to watch the qcow2 file:

cmd: watch -n 1 qemu-img info vm-100-disk-0.qcow2

Every 1.0s: qemu-img info vm-100-disk-0.qcow2 proxmox01: Wed Sep 30 11:02:02 2020

image: vm-100-disk-0.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 21.2 GiB
cluster_size: 65536
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false


After a while, all the space in /DATA, which is the Directory Storage, is full.
df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 394M 5.8M 388M 2% /run
/dev/mapper/pve-root 9.8G 2.5G 7.4G 25% /
tmpfs 2.0G 52M 1.9G 3% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vdb1 40G 40G 316K 100% /DATA
/dev/fuse 30M 16K 30M 1% /etc/pve
tmpfs 394M 0 394M 0% /run/user/0

and the image has grown to almost 40G...

qemu-img info vm-100-disk-0.qcow2
image: vm-100-disk-0.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 39.9 GiB
cluster_size: 65536
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false

And the qm move_disk command failed with an error after a while:

qm move_disk 100 scsi0 VMS --format qcow2
create full clone of drive scsi0 (local-lvm:vm-100-disk-0)
Formatting '/DATA/images/100/vm-100-disk-0.qcow2', fmt=qcow2 cluster_size=65536 preallocation=metadata compression_type=zlib size=107374182400 lazy_refcounts=off refcount_bits=16
drive mirror is starting for drive-scsi0
drive-scsi0: transferred: 384827392 bytes remaining: 106989355008 bytes total: 107374182400 bytes progression: 0.36 % busy: 1 ready: 0
...
...
drive-scsi0: transferred: 42833281024 bytes remaining: 64541097984 bytes total: 107374379008 bytes progression: 39.89 % busy: 1 ready: 0
drive-scsi0: transferred: 42833281024 bytes remaining: 64541097984 bytes total: 107374379008 bytes progression: 39.89 % busy: 1 ready: 0
drive-scsi0: Cancelling block job
drive-scsi0: Done.
storage migration failed: mirroring error: drive-scsi0: mirroring has been cancelled

Then I tried qemu-img convert and everything worked fine:

qemu-img convert -O qcow2 /dev/pve/vm-100-disk-0 /DATA/images/100/vm-100-disk-0.qcow2

qemu-img info vm-100-disk-0.qcow2
image: vm-100-disk-0.qcow2
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 6.01 GiB
cluster_size: 65536
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false
 
qm move_disk uses qemu-img convert if the VM is offline (and the target disk is always sparse).

If the VM is online, it uses the QEMU block-job mirror. For some storages it's not possible to get a sparse destination volume with a block job. (You need to trim it afterwards, for example with the discard option enabled on the disk and the guest-trim option enabled in the VM's agent settings.)
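A hedged sketch of that post-move cleanup for an online move (VM ID, storage and volume names are taken from the examples in this thread):

# on the host: make sure discard is enabled on the moved disk
qm set 100 --scsi0 VMS:100/vm-100-disk-0.qcow2,discard=on,ssd=1

# inside the guest: release the unused blocks so the qcow2 becomes sparse again
fstrim -av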
 
Hi,
the qemu-img command should not be used for running VMs. From the man page:
Warning: Never use qemu-img to modify images in use by a running virtual machine or any other process; this may destroy the image. Also, be aware that querying an image that is being modified by another process may encounter inconsistent state.
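So if the manual qemu-img convert workaround is used, it should be done with the VM powered off, roughly like this (a sketch; VM ID, source LV and target path are from the examples above, and the config update for a directory storage called VMS is an assumption):

qm shutdown 100                                                    # make sure nothing is writing to the source volume
qemu-img convert -p -O qcow2 /dev/pve/vm-100-disk-0 /DATA/images/100/vm-100-disk-0.qcow2
qm set 100 --scsi0 VMS:100/vm-100-disk-0.qcow2,discard=on,ssd=1    # attach the new sparse image instead of the LVM volume
qm start 100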
 
