Hello,
I have a PVE cluster "A" (7.3) which has NO hyperconverged Ceph. There is another PVE cluster "D" (7.3) which has a lot of Ceph storage. On "D" I created one 5+3 EC pool with pveceph pool create pool_d --erasure-coding k=5,m=3, which results in a pool_d-data and a pool_d-metadata pool. Next, also with pveceph and the same 5+3 EC parameters, I created another EC pool to be used by PVE cluster "A", named "pool_a-data" and "pool_a-metadata".
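For completeness, the pool creation on "D" was essentially the following (the second create call is just written out the same way as the first one; the ls/detail commands are only there to double-check what got created):
Code:
# On a node of cluster "D": each call creates <name>-data (erasure coded)
# plus <name>-metadata (replicated).
pveceph pool create pool_d --erasure-coding k=5,m=3
pveceph pool create pool_a --erasure-coding k=5,m=3

# Check the resulting pools and their erasure code profile.
pveceph pool ls
ceph osd pool ls detail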
On cluster "A" I added storage pool "pool_a" using the graphical interface which all worked fine. In between I have several VMs running on "A" with their storage on cluster "D". Works fine so far. However recently I took a look at ceph df on cluster "D" and saw that for pool "pool_a" only the metadada pool shows a usage, not the data pool which actually should most of the used storage which is strange:
Code:
#<D>: ceph df
--- POOLS ---
POOL                  ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics  1    1  444 MiB       91  1.7 GiB      0     54 TiB
rbd                    3   32  2.3 KiB        3   25 KiB      0     54 TiB
pool_d-data            6  256   95 GiB   24.47k  152 GiB   0.07    136 TiB
pool_d-metadata        7  128  143 KiB       18  530 KiB      0     72 TiB
pool_a-data           12  256      0 B        0      0 B      0    136 TiB
pool_a-metadata       13  128   13 TiB    3.34M   38 TiB  14.84     72 TiB
As you can see, for pool_d the bulk of the usage is reported in pool_d-data (95 GiB), with only a little in pool_d-metadata.
For pool_a, used by cluster "A", it is the other way around: only the metadata pool shows anything under STORED (13 TiB), while pool_a-data sits at 0 B.
Is this just a display problem, or is this erasure-coded pool not being used as it should be?
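In case it matters, this is a minimal sketch of how I understand the external RBD entry in /etc/pve/storage.cfg on "A" is supposed to look (the monitor addresses, username and content line are placeholders, and the keyring setup is left out; the data-pool line is what should direct the actual image data into the EC data pool):
Code:
rbd: pool_a
        content images
        krbd 0
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        pool pool_a-metadata
        data-pool pool_a-data
        username admin
And on "D" one can check per image where the data really ends up (vm-100-disk-0 is just an example name); rbd info should print a data_pool line when the image data is placed in the EC pool:
Code:
# List the RBD images (they live in the replicated metadata pool).
rbd ls pool_a-metadata

# For a given image, look for a "data_pool:" line in the output.
rbd info pool_a-metadata/vm-100-disk-0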
Thanks
Rainer