I have a 3 node cluster using ceph with the following OSD configuration:
node1:
3x 500GB SSD (465GB Usable)
2x 4TB HDD (3.64TB Usable)
node2:
3x 500GB SSD (465GB Usable)
2x 4TB HDD (3.64TB Usable)
node3:
3x 500GB SSD (465GB Usable)
2x 3TB HDD (2.73TB Usable)
I have created 2 pools based on the SSD and HDD device classes, each with size 3/2. ceph df shows the following:
RAW STORAGE:
CLASS    SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd      20 TiB   14 TiB   5.8 TiB  5.8 TiB   28.83
ssd      4.1 TiB  2.3 TiB  1.8 TiB  1.8 TiB   43.94
TOTAL    24 TiB   17 TiB   7.5 TiB  7.6 TiB   31.40

POOLS:
POOL      ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
ceph-hdd  3   1.9 TiB  566.34k  5.8 TiB  32.89  3.9 TiB
ceph-ssd  4   624 GiB  236.34k  1.8 TiB  50.86  589 GiB
Just looking at the SSD drives: 465GB x 9 gives 4185GB, which explains the SSD total size of 4.1 TiB, and the %RAW USED of 43.94%. However, the ceph-ssd pool shows 50.86% used, and the Proxmox GUI shows 75.61% used for that storage (1.79 TiB of 2.36 TiB).
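The arithmetic above can be sanity-checked with a quick Python sketch. The replication factor of 3 (USED ≈ 3 x STORED for a size-3 pool) and the guess that the GUI total is pool USED + MAX AVAIL are my assumptions, not confirmed values:

```python
# All capacity figures are taken from the ceph df output quoted above.

# Raw SSD capacity: 9 drives x 465 GiB usable each
ssd_total_gib = 465 * 9                    # 4185 GiB, i.e. ~4.1 TiB
raw_used_pct = 1.8 / 4.1 * 100             # ~43.9%, matches %RAW USED

# Pool-level view for ceph-ssd (assumption: size 3, so USED ~= 3 x STORED)
stored_gib = 624
used_gib = stored_gib * 3                  # 1872 GiB, i.e. ~1.8 TiB USED

# Hedged guess at the GUI numbers: total = pool USED + MAX AVAIL
gui_total_tib = 1.8 + 589 / 1024           # ~2.36 TiB
gui_used_pct = 1.8 / gui_total_tib * 100   # ~75.8%, close to the GUI's 75.61%
```

If that guess is right, the GUI percentage is measured against USED + MAX AVAIL rather than against the raw class capacity, which would explain why it differs from both %RAW USED and the pool %USED.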
Why is there a difference in the % used?
Also, where does the 2.36 TiB come from in the GUI?