ceph lxc container reclaim free space

bonkersdeluxe

Renowned Member
Jan 20, 2014
Hi, I have an LXC container with a disk size of 6 TB, but Ceph reports 8.77 TB used for it.
The pool size (replication) is 2.

mount -o discard /dev/rbd0 /mnt/myrbd
fstrim -v /mnt/myrbd
/mnt/myrbd: 665.5 GiB (714574659584 bytes) trimmed

But nothing changed; Ceph still shows 8.77 TB of used storage.
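
(For reference: if a trim like this has no visible effect, one thing worth checking is whether the mapped device passes discards through at all. A minimal check, assuming the device is /dev/rbd0 as in the output below; both values should be non-zero, otherwise fstrim cannot free anything on the backing RBD:

# check discard support of the mapped RBD device
cat /sys/block/rbd0/queue/discard_granularity
cat /sys/block/rbd0/queue/discard_max_bytes
)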


df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 17M 1.6G 2% /run
rpool/ROOT/pve-1 47G 1.3G 46G 3% /
tmpfs 7.8G 43M 7.8G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
rpool 46G 128K 46G 1% /rpool
rpool/ROOT 46G 128K 46G 1% /rpool/ROOT
rpool/data 46G 128K 46G 1% /rpool/data
/dev/fuse 30M 20K 30M 1% /etc/pve
/dev/sdc1 94M 5.5M 89M 6% /var/lib/ceph/osd/ceph-2
/dev/sdb1 94M 5.5M 89M 6% /var/lib/ceph/osd/ceph-1
/dev/sde1 94M 5.5M 89M 6% /var/lib/ceph/osd/ceph-0
/dev/sdd1 94M 5.5M 89M 6% /var/lib/ceph/osd/ceph-3
tmpfs 1.6G 0 1.6G 0% /run/user/0
/dev/rbd0 5.8T 3.8T 1.9T 68% /mnt/myrbd


With pool size 2, the Ceph allocated storage should only be around 7.79 TB.
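
(A quick way to compare the image's actual allocation with the overall pool usage; pool and image names below are placeholders, adjust them to your setup:

rbd du rbd/vm-100-disk-0    # provisioned vs. actually allocated size of the image
ceph df                     # per-pool usage across the cluster
)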

What is the right way to release unused space from an LXC container on a Ceph RBD?
Thank you!

Sincerely Bonkersdeluxe
 
You need to trim (discard) the unused blocks from the container's filesystem, as the filesystem is the only place where the information about used and unused blocks resides.
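
(A minimal sketch of the two usual ways to do that on Proxmox VE, assuming your version ships pct fstrim and using VMID 100 as a placeholder:

pct fstrim 100    # from the host, trims the running container's filesystems
fstrim -v /       # or run from inside the container itself
)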
 
Hi Alwin
Ah ok.
So via SSH into the container, on its root mount point:

fstrim -v /

Thank you!

Sincerely Bonkersdeluxe
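
(To keep unused space from piling up again, the trim can also be scheduled instead of run by hand; a sketch, assuming the container runs a systemd-based distribution that ships util-linux's fstrim.timer:

systemctl enable --now fstrim.timer    # periodically runs fstrim on mounted filesystems
)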