Good evening guys,
I'm setting up Proxmox with Ceph on top of an Oracle ZFS storage appliance, and I'd like to know whether this setup is workable, because my tests so far haven't gotten the Ceph configuration to succeed. I have 2 blades pointed at the storage; I built a cluster and created 4 OSDs on each blade. My problem starts when I configure the pool: the OSDs begin to go down (it's not a hardware problem), and I get the message that the PGs are inactive. I don't know if I'm misconfiguring something here. Any help would be appreciated.
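In case it matters, this is roughly how I'm creating the pool (a sketch of what I'm doing, not my exact command; the pool name "vm-pool" is just a placeholder, and size/min_size/pg_num are the defaults Proxmox suggested):

    # Run on one of the Proxmox nodes.
    # Replicated pool with the suggested defaults: 3 replicas,
    # minimum 2 to serve I/O, 128 placement groups.
    pveceph pool create vm-pool --size 3 --min_size 2 --pg_num 128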
Here is my Ceph status output:
root@odaftc01:~# ceph -s
  cluster:
    id:     213b00ac-cba6-41a6-bb1b-da2423ccd40f
    health: HEALTH_WARN
            3 osds down
            1 host (3 osds) down
            Reduced data availability: 128 pgs inactive
            54 slow ops, oldest one blocked for 1784 sec, daemons [mon.odaftc01,mon.odaftc02] have slow ops.

  services:
    mon: 2 daemons, quorum odaftc01,odaftc02 (age 95m)
    mgr: odaftc02(active, since 95m), standbys: odaftc01
    osd: 6 osds: 1 up (since 3h), 4 in (since 3h)

  data:
    pools:   1 pools, 128 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             128 unknown
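If more detail would help, these are the standard Ceph diagnostic commands I can run next and post the output of (nothing specific to my setup assumed here):

    # Show the CRUSH tree and which OSDs/hosts are marked down
    ceph osd tree

    # Detailed health messages, including which PGs are inactive
    ceph health detail

    # Per-pool replication settings (size/min_size) and CRUSH rule
    ceph osd pool ls detail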