So I just noticed something I had never noticed before, and I am not sure whether it is a bug or not.
I have 2 Ceph pools.
Call them Ceph_A and Ceph_B for simplicity's sake. They are both erasure-coded pools, each with a cache pool in front of it (afaik mandatory for EC pools).
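For context, the pools were set up roughly along these lines (reconstructed from memory, so the names, PG counts and EC profile here are placeholders, not my exact values):

    ceph osd pool create Ceph_A 64 64 erasure        # EC data pool
    ceph osd pool create Ceph_A_cache 64 64          # replicated cache pool
    ceph osd tier add Ceph_A Ceph_A_cache            # attach the cache tier
    ceph osd tier cache-mode Ceph_A_cache writeback
    ceph osd tier set-overlay Ceph_A Ceph_A_cache    # clients go through the cache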
Now, because I had issues deleting a different vDisk (known issue: "image still has watchers"), I decided to move all the disks I wanted to keep off Ceph_A and onto Ceph_B.
On Ceph_A I had set up a vDisk using SCSI (Discard=on) with the VirtIO SCSI controller type. Its size is 4096 GB, of which 66 GB are utilized. I used the GUI to move said vDisk to the Ceph_B pool, and now I have 4096 GB of utilization on that pool. The thin provisioning seems to be gone. In other words, my 66 GB image turned into a 4096 GB image via the "Move disk" command in the GUI.
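For reference, this is roughly how I am reading the utilization, and what I would try in order to get the space back (vm-100-disk-1 is just a placeholder for the actual image name, and afaik rbd sparsify needs Nautilus or newer):

    rbd du Ceph_A/vm-100-disk-1         # provisioned vs. actually used, before the move
    rbd du Ceph_B/vm-100-disk-1         # after the move: used == provisioned == 4096 GB

    fstrim -av                          # inside the guest; should work since Discard=on
    rbd sparsify Ceph_B/vm-100-disk-1   # or re-sparsify the image from a Ceph node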
Is that working as expected, or is this a bug?