Hi,
it seems that I am running out of space on the OSDs of a five-node hyperconverged Proxmox Ceph cluster:
Code:
root@proxmox07:~# rbd du --pool ceph-proxmox-VMs
NAME PROVISIONED USED
vm-100-disk-0 1 GiB 1 GiB
vm-100-disk-1 1 GiB 1020 MiB
vm-101-disk-0 1 GiB 1 GiB
vm-101-disk-1 1 GiB 1016 MiB
vm-102-disk-0 1 GiB 1 GiB
vm-102-disk-1 1 GiB 1020 MiB
vm-103-disk-0 1 GiB 1 GiB
...
vm-158-disk-2 32 GiB 32 GiB
vm-158-disk-3 32 GiB 32 GiB
vm-159-disk-0 128 MiB 4 MiB
vm-159-disk-1 2 GiB 2.0 GiB
vm-160-disk-0 128 MiB 4 MiB
vm-160-disk-1 2 GiB 2.0 GiB
vm-161-disk-0@2022-05-05_04:15 8 GiB 8 GiB
vm-161-disk-0@2022-05-06_04:15 8 GiB 8 GiB
vm-161-disk-0@2022-05-08_04:15 8 GiB 0 B
vm-161-disk-0@2022-05-09_04:15 8 GiB 0 B
vm-161-disk-0@2022-05-10_04:15 8 GiB 0 B
vm-161-disk-0@2022-05-11_04:15 8 GiB 0 B
vm-161-disk-0@2022-05-12_04:15 8 GiB 0 B
vm-161-disk-0@2022-05-13_04:15 8 GiB 0 B
vm-161-disk-0@2022-05-15_04:15 8 GiB 0 B
vm-161-disk-0 8 GiB 0 B
vm-161-disk-1@2022-05-05_04:15 16 GiB 4.6 GiB
vm-161-disk-1@2022-05-06_04:15 16 GiB 5.6 GiB
vm-161-disk-1@2022-05-08_04:15 16 GiB 4.5 GiB
vm-161-disk-1@2022-05-09_04:15 16 GiB 4.5 GiB
vm-161-disk-1@2022-05-10_04:15 16 GiB 3.1 GiB
vm-161-disk-1@2022-05-11_04:15 16 GiB 2.9 GiB
vm-161-disk-1@2022-05-12_04:15 16 GiB 2.4 GiB
vm-161-disk-1@2022-05-13_04:15 16 GiB 2.8 GiB
vm-161-disk-1@2022-05-15_04:15 16 GiB 4.3 GiB
vm-161-disk-1 16 GiB 3.5 GiB
vm-162-disk-0@2022-05-05_05:15 8 GiB 8 GiB
vm-162-disk-0@2022-05-06_05:15 8 GiB 8 GiB
vm-162-disk-0@2022-05-08_05:15 8 GiB 0 B
vm-162-disk-0@2022-05-09_05:15 8 GiB 0 B
vm-162-disk-0@2022-05-10_05:15 8 GiB 0 B
vm-162-disk-0@2022-05-11_05:15 8 GiB 0 B
vm-162-disk-0@2022-05-12_05:15 8 GiB 0 B
vm-162-disk-0@2022-05-13_05:15 8 GiB 0 B
vm-162-disk-0@2022-05-15_05:15 8 GiB 0 B
vm-162-disk-0 8 GiB 0 B
vm-162-disk-1@2022-05-05_05:15 16 GiB 4.3 GiB
vm-162-disk-1@2022-05-06_05:15 16 GiB 5.4 GiB
vm-162-disk-1@2022-05-08_05:15 16 GiB 4.5 GiB
vm-162-disk-1@2022-05-09_05:15 16 GiB 4.6 GiB
vm-162-disk-1@2022-05-10_05:15 16 GiB 2.9 GiB
vm-162-disk-1@2022-05-11_05:15 16 GiB 2.7 GiB
vm-162-disk-1@2022-05-12_05:15 16 GiB 2.7 GiB
vm-162-disk-1@2022-05-13_05:15 16 GiB 2.6 GiB
vm-162-disk-1@2022-05-15_05:15 16 GiB 4.2 GiB
vm-162-disk-1 16 GiB 3.5 GiB
...
vm-519-disk-1 48 GiB 48 GiB
vm-520-disk-0 16 GiB 16 GiB
vm-520-disk-1 32 GiB 32 GiB
vm-521-disk-0 16 GiB 16 GiB
vm-521-disk-1 32 GiB 32 GiB
vm-522-disk-0 16 GiB 16 GiB
vm-522-disk-1 32 GiB 32 GiB
vm-523-disk-0 16 GiB 16 GiB
vm-523-disk-1 32 GiB 32 GiB
<TOTAL> 4.8 TiB 3.8 TiB
root@proxmox07:~#
So that is 4.8 TiB of provisioned VM disks with 3.8 TiB actually used. Times three, since we keep 3 copies on Ceph, that should come to 14.4 TiB / 11.4 TiB of raw usage.
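That estimate is simply the rbd du totals multiplied by the replica count (a quick sanity check, assuming the pool really runs with size = 3 as we configured it):
Code:
# confirm the replica count of the RBD pool (should report size: 3)
ceph osd pool get ceph-proxmox-VMs size
# expected raw usage = rbd du totals x number of copies
#   provisioned: 4.8 TiB * 3 = 14.4 TiB
#   used:        3.8 TiB * 3 = 11.4 TiB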
But the rados usage looks like this:
Code:
root@proxmox07:~# rados df
POOL_NAME                 USED   OBJECTS    CLONES     COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED       RD_OPS       RD       WR_OPS       WR  USED COMPR  UNDER COMPR
ceph-proxmox-VMs        49 TiB  13031234  11844514   39093702                   0        0         0  74678059494  1.5 PiB  51951708296  1.6 PiB     4.8 TiB       15 TiB
cephfs_data             18 TiB  72881660  29035966  218644980                   0        0         0    478582658   39 TiB    196941115  6.3 TiB      11 TiB       23 TiB
cephfs_metadata         12 GiB   4305902         0   12917706                   0        0         0    287980760  792 GiB    451931553  113 TiB     2.2 GiB      4.3 GiB
device_health_metrics   36 MiB        15         0         45                   0        0         0        12189   92 MiB         8676   35 MiB         0 B          0 B
nfs-ganesha            5.2 MiB        35         0        105                   0        0         0       292586  147 MiB          394  399 KiB         0 B          0 B
total_objects    90218846
total_used       69 TiB
total_avail      18 TiB
total_space      87 TiB
root@proxmox07:~#
49 TiB for the ceph-proxmox-VMs pool alone! What is going on here?
The Ceph RBD trash is empty, by the way:
Code:
root@proxmox07:~# rbd trash ls ceph-proxmox-VMs
root@proxmox07:~#
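In case it helps with narrowing this down, these are the additional checks I can run (just the commands for now; happy to post their output):
Code:
# raw vs. stored usage per pool, including replication overhead
ceph df detail
# per-image and per-snapshot usage in machine-readable form
rbd du --pool ceph-proxmox-VMs --format json
# snapshots of a single image, e.g. vm-161-disk-1
rbd snap ls ceph-proxmox-VMs/vm-161-disk-1
# trash entries from any source, not only user-deleted images
rbd trash ls --all ceph-proxmox-VMs
# per-OSD utilisation
ceph osd df tree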
Best regards
Rainer