[SOLVED] Thin Provisioning in a Ceph Cluster managed by Proxmox

bthn.szk

New Member
Jun 20, 2023
Hello,

I have a system landscape with a Proxmox VE instance and an external three-node Ceph cluster (quorum of 3). For testing purposes, I migrated two VMs, along with their disks, from an existing oVirt KVM landscape to Proxmox VE using the CLI utilities provided by Proxmox; this was possible because both landscapes are KVM-based, so I simply imported the disks (see Migration QEMU/KVM). In my setup, I have an RBD pool called volumes where I store my disk images and containers, plus a CephFS storage called ressources for VZDump backup files, snippets, container templates, and ISO images.
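For reference, the import itself was done roughly like this (the VM ID, image path, and target storage are just examples from my setup):

Bash:
# import the copied oVirt disk image into the RBD-backed storage "volumes"
# (the VM 101 had already been created as an empty shell in Proxmox VE)
qm importdisk 101 /mnt/migration/ovirt-vm-disk.qcow2 volumes
# the imported disk then shows up as an unused disk and can be attached via the GUI or "qm set"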

From the documentation and from various forum posts, I gathered that Ceph RBD already supports thin provisioning. However, I haven't found any guidelines or further details about it.

Regarding my questions:
  1. Is it possible to explicitly enable thin provisioning for the VM disks in the Ceph cluster by setting it in the Ceph pool configuration?
  2. Does the Ceph cluster itself manage this automatically?
  3. Is there any way to configure or check this setting through Proxmox?

The only details I could find are shown below:

Bash:
root@ceph-vm1:/# rbd du volumes/vm-101-disk-0
NAME           PROVISIONED  USED
vm-101-disk-0      120 GiB  82 GiB
root@ceph-vm1:/# rbd du volumes/vm-100-disk-0
NAME           PROVISIONED  USED
vm-100-disk-0      120 GiB  87 GiB

Any suggestion or help is welcome! Thank you very much.

Best regards,
Batuhan
 
Thin provisioning is automatically enabled when using RBD.
You should configure the virtual disks as "SSD" and enable discard in their settings.
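From the CLI this would look roughly like the following (assuming VM 101 with a SCSI disk on the volumes storage; adjust the disk key and volume name to your actual configuration):

Bash:
# enable SSD emulation and discard on the existing disk (the image itself is kept as-is)
qm set 101 --scsi0 volumes:vm-101-disk-0,discard=on,ssd=1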

Then the VM is able to run fstrim or similar tools to tell the storage layer that certain blocks are not used by the filesystem any more. This will free up space in the RBD pool.

In your case it could be that importing the disks caused thick provisioning because all the zeros were written to the new RBD image. Just run fstrim to get rid of the zeroed blocks.
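Inside the guest, something like this should be enough (assuming a Linux guest whose mounted filesystems support discard):

Bash:
# trim all mounted filesystems that support discard and report how much was freed
fstrim -av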
 
Hello,

Just as you described, I reconfigured the VM settings. After a reboot of the machines I ran fstrim, which discarded the unused blocks and reclaimed around 400 GB of free space.

Thank you very much!

Best regards,
Batuhan
 
