Ceph and trim/discard

John.N

Hello there,

I have found many VMs (mostly older ones) on my cluster (PVE 5.4, Ceph Luminous) that will not free up space on RBD even after trimming.
For example, one VM has 400GB allocated and rbd du shows 399GB used, whereas df -h inside the VM shows only 200GB used.
fstrim runs successfully and I also have the discard option in fstab, but nothing is released.
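For anyone checking the same thing: whether the guest actually sees the disk as discard-capable can be verified roughly like this inside the VM (sda is just an example device):

# non-zero DISC-GRAN / DISC-MAX means the guest can issue discard requests
lsblk --discard
# or check a single device directly
cat /sys/block/sda/queue/discard_max_bytes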

I googled a lot and found other threads in this forum, but no real solution.
Any ideas?
 
The SSD emulation and Discard options are set in the Proxmox GUI, and VirtIO SCSI disks are used as recommended.
No snapshots.
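For reference, the VM config on the host can be checked roughly like this (VM ID 100 is a placeholder); the disk line should contain discard=on and, if set in the GUI, ssd=1:

# on the PVE node
qm config 100 | grep -E 'scsi|virtio'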

That's why it's driving me crazy!
 
All VMs drift like this as time passes.
Newer VMs show nearly identical numbers in rbd du and df -h.

Older VMs are reaching their maximum disk size in rbd du.

fstrim -va shows that it trims successfully.
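A rough controlled test (pool and image names are placeholders) to see whether trimmed space ever comes back to Ceph: write and delete a large file in the guest, trim, and compare rbd du on a node before and after:

# on a PVE/Ceph node, before the test
rbd du rbd/vm-100-disk-0
# inside the guest: allocate ~4GB, sync, delete, then trim
dd if=/dev/zero of=/root/testfile bs=1M count=4096
sync
rm /root/testfile
fstrim -v /
# on the node again: USED should drop by roughly the freed amount
rbd du rbd/vm-100-disk-0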
 
Hi, I have the same problem: after a year in production, thin provisioning no longer works. I ran fstrim -av and the space is released inside the guest, but Proxmox/Ceph doesn't update the usage.
 
Sorry, to clarify: I run fstrim -av and use a VirtIO SCSI disk with the discard option enabled.

Proxmox VE 6.4-4 + Ceph Nautilus
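As a side note, periodic trimming can also be scheduled inside the guest with the systemd timer instead of running it by hand (assuming a systemd-based distro; just a sketch, it won't fix the accounting issue by itself):

# inside the guest: run fstrim on a schedule instead of mounting with -o discard
systemctl enable --now fstrim.timer
systemctl status fstrim.timer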
 
An ext4 bug (not related to Ceph) was found recently where an extent can only be trimmed once.

Personally, I'm using XFS for my VMs. I've been using Ceph since 2015, with 4000 VMs on librbd, and I've never had any discard problem. (Windows NTFS works fine too.)
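If you want to see whether a guest might be affected, a quick and purely illustrative check of the root filesystem type and kernel version inside the guest:

# inside the guest
findmnt -no FSTYPE /
uname -r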
 

That's good to know, thank you!

PS: Could you share a link, mailing list thread, issue number, or patch/fix commit for that ext4 bug where an extent is only trimmed once? I want to make sure that my kernel is not affected.
 
Sorry to resurrect such an old thread, but I ran into probably the same problem. In the end trimming worked as it should, yet the VM was not in a state in which it could work: the VM disk was heavily fragmented, so that each 1 MB chunk of the RBD image held a little bit of data.

In order to reclaim the free space, I first had to rearrange the data on the virtual disk; only then did trimming yield the desired effect. For ext4, the procedure is very simple (see the command sketch below):
  • boot into a live Linux
  • fsck the disk
  • resize2fs -M -p <ext4-partition>
  • resize2fs -p <ext4-partition>
and then you'll reclaim much more space. This can be done for other filesystems too.
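Roughly, the whole sequence from the live system looks like this (/dev/sda1 is just an example; make sure the filesystem is unmounted first):

# from a live Linux, with the ext4 filesystem unmounted
e2fsck -f /dev/sda1          # check the filesystem before resizing
resize2fs -M -p /dev/sda1    # shrink to the minimum size, compacting the data
resize2fs -p /dev/sda1       # grow it back to fill the partition
# then boot the VM again and trim, so the now-unused extents are discarded
fstrim -av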
 
