Hi,
I have 2 nodes, each with 6 OSDs, and I followed this guide: https://pve.proxmox.com/wiki/Ceph_Server to create the Ceph cluster. However, after creating the OSDs from the web GUI, it shows 4/8 OSDs out.
I tried creating a pool on Ceph and copying data to it. The storage cluster then reported degraded health.
Code:
2018-05-28 01:53:43.980574 mon.hl101 mon.0 10.40.10.1:6789/0 715 : cluster [WRN] Health check failed: Degraded data redundancy: 256 pgs undersized (PG_DEGRADED)
2018-05-28 01:53:58.111651 mon.hl101 mon.0 10.40.10.1:6789/0 726 : cluster [WRN] Health check update: Degraded data redundancy: 1/3 objects degraded (33.333%), 1 pg degraded, 256 pgs undersized (PG_DEGRADED)
2018-05-28 01:54:04.184970 mon.hl101 mon.0 10.40.10.1:6789/0 729 : cluster [WRN] Health check update: Degraded data redundancy: 143/429 objects degraded (33.333%), 96 pgs degraded, 256 pgs undersized (PG_DEGRADED)
2018-05-28 01:54:09.340770 mon.hl101 mon.0 10.40.10.1:6789/0 730 : cluster [WRN] Health check update: Degraded data redundancy: 246/738 objects degraded (33.333%), 146 pgs degraded, 256 pgs undersized (PG_DEGRADED)
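For what it's worth, the "undersized" warnings in the log above are what you'd expect when a replicated pool's size (replica count, default 3) exceeds the number of hosts: with only 2 nodes, CRUSH cannot place a third replica on a third host, so every PG stays undersized. A hedged sketch of how this can be checked and, if the reduced redundancy is acceptable, worked around (the pool name `mypool` is a placeholder):

```shell
# Show each pool's replica count (run on a cluster node with a working ceph CLI)
ceph osd pool ls detail

# With only 2 hosts, a size-3 pool can never become fully replicated.
# One workaround is to drop to 2 replicas; note that min_size 1 trades
# safety for availability and is generally discouraged.
ceph osd pool set mypool size 2
ceph osd pool set mypool min_size 1
```

This only silences the undersized state; it does not add any redundancy beyond the two nodes.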
I deleted the pool and the storage health became OK again.
I then navigated to Disks on each node and saw that the Usage column said "partitions". After searching, I found this thread and followed its instructions: zap the disks and create the OSDs again. Now it shows 12 in and 4 out, 16 OSDs in total (we actually have only 12 disks).
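For reference, the zap-and-recreate steps above on a Proxmox VE 5 / Ceph Luminous setup are roughly the following; the four extra "out" OSDs are most likely stale entries left in the CRUSH map from the first attempt, which can be purged once confirmed unused. Device names and OSD IDs below are placeholders:

```shell
# Wipe the partition table of a disk still shown as "partitions"
# (DATA-DESTRUCTIVE; /dev/sdX is a placeholder for the real device)
ceph-disk zap /dev/sdX

# Recreate the OSD through the Proxmox tooling
pveceph createosd /dev/sdX

# List all OSD entries; stale ones from the first attempt show as down/out
ceph osd tree

# Remove a confirmed-stale entry (Luminous and later; 12 is a placeholder ID)
ceph osd purge 12 --yes-i-really-mean-it
```

After purging the stale entries, `ceph osd tree` should again show only the 12 physical disks.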
May I know how to fix this error? Is it because there are only 2 monitors on 2 nodes (no stable cluster quorum)?
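On the monitor question: with 2 monitors, a majority quorum requires both to be up, so losing either one stalls the cluster; an odd number of monitors (typically 3 on 3 nodes) is the usual minimum for a stable quorum. The current quorum state can be inspected with:

```shell
# Show which monitors are in quorum and who the leader is
ceph quorum_status --format json-pretty
```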
Thanks,