[SOLVED] vDisk "move" command skips over "thin provisioning" on Ceph_Pools ??

Q-wulf

Well-Known Member
So I just noticed something I had never noticed before, and I am not sure whether it is a bug or not.

I have 2 Ceph pools, call them Ceph_A and Ceph_B for simplicity's sake. Both are erasure-coded pools with a cache pool in front of them (afaik mandatory for EC pools).

Now, because I had issues deleting a different vDisk (known issue: "image still has watchers"), I decided to move all the disks I wanted to keep off Ceph_A and onto Ceph_B.

On Ceph_A I had set up a vDisk attached via SCSI (Discard=on) with the VirtIO SCSI controller type. Its size is 4096 GB, of which 66 GB are utilized. I used the GUI to move said vDisk to the Ceph_B pool, and now I have 4096 GB of utilization on that pool. The thin provisioning seems to be gone. In other words, my 66 GB image turned into a 4096 GB image via the "Move disk" command in the GUI.
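For anyone wanting to verify this themselves: the provisioned vs. actually used size of an RBD image can be compared with `rbd du` (the pool and image names below are just examples, adjust them to your own setup):

```shell
# Show provisioned size vs. actual usage of the image in each pool.
# A fully thin image shows USED well below PROVISIONED; after a
# "Move disk" that copies zeros, USED grows to match PROVISIONED.
rbd du Ceph_A/vm-100-disk-1
rbd du Ceph_B/vm-100-disk-1
```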

Is that working as expected, or is this a bug ?
 
The qemu rbd block driver is missing a feature needed for drive mirroring, so the copy writes all blocks, including zeros.

Since you use discard=on, you can run a trim inside your guest VM to free the space again (with the "fstrim" command, for example).
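For example, inside a Linux guest (assuming the disk is attached via a SCSI controller with Discard=on and the filesystem is mounted at `/`):

```shell
# Inside the guest: tell the block layer which blocks are unused,
# so Ceph can reclaim the zeroed space copied by the disk move.
fstrim -v /

# Or trim all mounted filesystems that support discard:
fstrim -av
```

The freed space should then show up as reduced usage on the Ceph pool.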
 
So this is "working as intended", then.

It's a use case that never comes up in our production use; it just came up on my private nodes, so I thought I'd mention it.

Thanks @spirit
 
