drive-mirror RBD thin

So I have done a few tests, and it seems that whether the VM is on or off, the full image is migrated. From what I have heard, it is possible with Ceph to skip "whitespace" (unallocated blocks) when doing an RBD copy/move.

1/ If it's an empty RBD disk with no file system, the usage remains at 0.

2/ If it's an RBD disk with a file system, the copy takes up the full allocated size, compared to the original sparse RBD image (see the sketch below).
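
A minimal way to reproduce the two cases (the pool name "rbd" and image name "testdisk" are placeholders for illustration; older rbd releases take --size in MB):

rbd create rbd/testdisk --size 10240      # 10GB thin-provisioned image, no filesystem
rbd diff rbd/testdisk | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'   # ~0 MB
# case 1: move this empty disk - usage stays at ~0
# case 2: attach it to a VM, create a filesystem, move the disk,
# then re-run the one-liner against the new image - it reports the full 10GB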
 
Hi,
how do you define "whitespace"?
Ceph RBD images only store the data that has actually been written to the disk (blocks are only created/rewritten when they are used).
A move of an RBD image should therefore move the used space only.

If blocks are overwritten with zeros, they are still in use, unless you enable discard (use a VirtIO SCSI controller) and trim/discard the data inside the guest OS.
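
For reference, a sketch of what that looks like on Proxmox, assuming VM ID 100 and an existing Ceph volume named "ceph-pool:vm-100-disk-0" (both placeholder values):

qm set 100 --scsihw virtio-scsi-pci                     # switch to the VirtIO SCSI controller
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,discard=on   # let the guest pass discards through
# then, inside the guest OS:
fstrim -av                                              # release freed/zeroed blocks back to Ceph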

Udo
 
Correct. Take an RBD disk with a size of 200GB of which only 50GB is actually used: before a disk move, Ceph shows only 50GB in use, e.g. via "rbd diff rbd/toto | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'". During the disk move the progress bar shows it "moving" the full 200GB, and running the same command against the new RBD disk afterwards shows 200GB in use.

According to the original link I sent, Proxmox writes the zero blocks during the disk move, so Ceph still allocates the space. As you state, a trim/discard can be run inside the VM afterwards to recover the extra 150GB, but as the disk to move gets bigger, the time required for all these operations increases.
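
Putting that together, the before/after check looks roughly like this (the 50GB/200GB figures are the example values from above; the target image name is a placeholder):

# before the move: only the written blocks count (~50GB here)
rbd diff rbd/toto | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
# ... run "Move disk" in the Proxmox GUI ...
# after the move: the new image reports the full provisioned size (~200GB)
rbd diff rbd/vm-100-disk-1 | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
rbd du rbd/vm-100-disk-1    # newer Ceph releases report provisioned vs. used directly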
 
Hi Ashley,
that sounds like a bug!

Udo
 
It's the same issue as the post I linked from 2014, so I wanted to check whether it has since been resolved and this is a bug, or whether the behaviour is still the same as in the 2014 post.

Happy to log a bug if that's what is required. As I said, an empty RBD image with no file system is fine, but as soon as the disk contains any data, the move causes a full write of the RBD size.
 
It's a limitation in the QEMU RBD block driver.

It lacks an implementation of write zeroes, plus another thing I don't remember exactly.

The only way to reduce the space is to use discard/fstrim after the mirroring.
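
For completeness, the post-mirror cleanup would look something like this, assuming discard is already enabled on the disk (the image name is a placeholder):

# inside the guest, once the disk move has finished:
fstrim -av                  # trim all mounted filesystems that support discard
# on a Ceph node, confirm the image has shrunk back to its real usage:
rbd du rbd/vm-100-disk-1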
 
Thought so, thanks for confirming.
 
Hello, I also encountered the same problem in 2023. It doesn't seem to have been fixed yet in the latest QEMU (8.1.3, Nov 21st 2023).
 
