Ok. Clear. I will play with compression and have it migrated again, and see how much time it takes. Maybe that will be enough after compression to enable a 3/2 size/min_size value. Thank you itNGO :) I hope to have more OSDs for the single-OSD hosts soon.
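If the compression savings are enough, my understanding is that raising the replication on the pool is just these two commands (a sketch, not run yet, using my pool name):

ceph osd pool set Ceph-SSD-Pool-0 size 3
ceph osd pool set Ceph-SSD-Pool-0 min_size 2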
And does having it set to 1 give me data loss in the long term? (Not counting disk failures.) Because this message is a bit unclear (proxmox-ceph-docs):
Do not set a min_size of 1. A replicated pool with min_size of 1 allows I/O on an object when it has only 1 replica, which could lead to data...
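For my own reference, I think the current values can be checked per pool with (if I have the commands right):

ceph osd pool ls detail
ceph osd pool get Ceph-SSD-Pool-0 min_size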
Unfortunately I did not enable compression, so I have to migrate them again.. fun ;). Are this guide and the first 2 commands sufficient to enable it? https://docs.ceph.com/en/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
ceph osd pool set <pool-name> compression_algorithm...
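If I read that page correctly, the two per-pool commands would look something like this for my pool (snappy/aggressive are just the values I would pick, not tested yet):

ceph osd pool set Ceph-SSD-Pool-0 compression_algorithm snappy
ceph osd pool set Ceph-SSD-Pool-0 compression_mode aggressive

As far as I know this only compresses data written after the change, hence the migration.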
Hello itNGO,
I was expecting that. One host still has one big VM on it that needs to be migrated either to Ceph or to another storage, but I wanted +1 TB of headroom for it so I don't run into trouble when adding the last OSD. I think it would exceed the 85% mark of the near-full warning.
1 host will be fitted...
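To keep an eye on that while migrating, I'll watch the usage with something like (sketch):

ceph df
ceph osd df tree
ceph osd dump | grep ratio    # nearfull/full thresholds, 0.85/0.95 by default as far as I know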
Hello Community,
I have the following Ceph question about PGs and OSD capacity:
As you can see, the optimal number of PGs for my main pool (Ceph-SSD-Pool-0) is higher than the actual PG count of 193. Autoscale is not working then, as far as I can see. There are no target settings set yet...
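What I have looked at so far (just a sketch based on the docs, the ratio of 1.0 is only an example value):

ceph osd pool autoscale-status
ceph osd pool set Ceph-SSD-Pool-0 pg_autoscale_mode on
ceph osd pool set Ceph-SSD-Pool-0 target_size_ratio 1.0    # hint about the expected share of this pool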