I have 4 hosts with 12 drives each
I used the command: pveceph pool create testpool2 --erasure-coding k=4,m=2
The results are not what I expected.
It creates a testpool2-metadata pool: size/min_size 3/2, 32 PGs, 32 optimal PGs, autoscale mode warn. This seems fine.
It creates a testpool2-data pool: size/min_size 6/5, 128 PGs, 32 optimal PGs, autoscale mode warn.
I believe this means I got my k=4, m=2, but with min_size 5 the pool would stop serving I/O sooner than I want, so I changed it to 6/4.
I get these errors:
Reduced data availability: 128 pgs inactive
Degraded data redundancy: 128 pgs undersized
1 pools have too many placement groups
I don't know if this is OK or not.
I've tried to change the number of PGs to 32; the command completes but doesn't change the value.
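For reference, these are the commands I assume are the right way to make those two changes (standard ceph CLI; please correct me if this is the wrong approach):

# set min_size on the EC data pool (this is what I mean by "change to 6/4")
ceph osd pool set testpool2-data min_size 4

# try to bring the PG count down to the autoscaler's suggested 32
ceph osd pool set testpool2-data pg_num 32
ceph osd pool set testpool2-data pgp_num 32

(As far as I understand, newer Ceph versions reduce pg_num gradually by merging PGs, which might be why the value does not appear to change right away.)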
What I am trying to do is create an erasure-coded pool with k=4, m=2.
I want the pool to stay operational if 2 OSDs are lost.
I want the failure domain to be host
I want no more than 2 chunks on any one host (I think this is automatic).
I need the correct syntax for making the pool
If making the pool also requires editing a file afterwards, I need that file's location and name and the correct syntax for it.
Am I missing anything?
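My best guess at the syntax, pieced together from the Proxmox and Ceph docs, is below; the profile name ec-4-2 is just a placeholder and I am not certain the failure-domain property is spelled exactly like this, so treat it as a sketch rather than something I know works:

# Proxmox wrapper: create the EC pool with host as the failure domain
pveceph pool create testpool2 --erasure-coding k=4,m=2,failure-domain=host

# or the plain Ceph route: define a profile, then build the data pool from it
ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
ceph osd pool create testpool2-data 32 32 erasure ec-4-2

Although I also wonder whether a 6-chunk pool with a host failure domain can even place all of its chunks on only 4 hosts, and whether that is what the undersized PGs are telling me.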
The way I think Ceph works is that when saving a file, it will be broken into 4 data chunks and 2 parity chunks will be calculated. Then 2 chunks will be placed on one host, another 2 chunks on a second host, and 1 chunk on each of the remaining 2 hosts. If I lose a drive holding a chunk, Ceph will start rebuilding that data on the same or a different host. I can flag a drive or host prior to doing maintenance, which will temporarily pause that rebuilding.
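For the "no more than 2 chunks per host" and maintenance parts, my understanding (a guess, not something I have tested) is that it takes a custom CRUSH rule plus the noout flag. The rule name and id below are placeholders I made up, and the edit cycle is the usual get/decompile/edit/recompile/set sequence:

# export and decompile the CRUSH map so there is an actual file to edit
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt

# add a rule like this to crush.txt: pick 3 hosts, then 2 OSDs on each,
# so the 6 chunks never end up with more than 2 on a single host
rule ec-4-2-two-per-host {
    id 2
    type erasure
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take default
    step choose indep 3 type host
    step chooseleaf indep 2 type osd
    step emit
}

# recompile, load it back, and point the data pool at the new rule
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new
ceph osd pool set testpool2-data crush_rule ec-4-2-two-per-host

# before maintenance: stop Ceph from rebalancing while a host/OSD is down
ceph osd set noout
# after maintenance:
ceph osd unset noout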