Playing around with a 3-node PVE cluster with Ceph
Node1: pve-node, ceph-mon
Node2: pve-node, ceph-mon, 4 ceph osds
Node3: pve-node, ceph-mon, 4 ceph osds
In order to have 3 replicas for the data pool (distributing PGs across both hosts and disks), I've defined the CRUSH map as follows:
Code:
# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
# types
type 0 osd
type 1 sets
type 2 host
type 3 rack
type 4 datacenter
type 5 region
type 6 root
# buckets
sets pve02A_ssd_set1 {
	id -4		# do not change unnecessarily
	# weight 0.920
	alg straw
	hash 0	# rjenkins1
	item osd.0 weight 0.460
	item osd.1 weight 0.460
}
sets pve02A_ssd_set2 {
	id -5		# do not change unnecessarily
	# weight 0.920
	alg straw
	hash 0	# rjenkins1
	item osd.2 weight 0.460
	item osd.3 weight 0.460
}
sets pve02B_ssd_set1 {
	id -6		# do not change unnecessarily
	# weight 0.920
	alg straw
	hash 0	# rjenkins1
	item osd.4 weight 0.460
	item osd.5 weight 0.460
}
sets pve02B_ssd_set2 {
	id -7		# do not change unnecessarily
	# weight 0.920
	alg straw
	hash 0	# rjenkins1
	item osd.6 weight 0.460
	item osd.7 weight 0.460
}
host pve02A_ssd {
	id -2		# do not change unnecessarily
	# weight 1.840
	alg straw
	hash 0	# rjenkins1
	item pve02A_ssd_set1 weight 0.920
	item pve02A_ssd_set2 weight 0.920
}
host pve02B_ssd {
	id -3		# do not change unnecessarily
	# weight 1.840
	alg straw
	hash 0	# rjenkins1
	item pve02B_ssd_set1 weight 0.920
	item pve02B_ssd_set2 weight 0.920
}
root default {
	id -1		# do not change unnecessarily
	# weight 3.680
	alg straw
	hash 0	# rjenkins1
	item pve02A_ssd weight 1.840
	item pve02B_ssd weight 1.840
}
# rules
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	# step choose firstn 2 type host
	# step chooseleaf firstn -2 type sets
	step emit
}
# end crush map
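For reference, this is roughly how an edited map like the one above gets compiled and loaded back into the cluster (just a sketch; the file names are placeholders):
Code:
# extract and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# edit crushmap.txt, then recompile and inject it
crushtool -c crushmap.txt -o crushmap.new.bin
ceph osd setcrushmap -i crushmap.new.bin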
# ceph osd tree
# id	weight	type name		up/down	reweight
-1	3.68	root default
-2	1.84		host pve02A_ssd
-4	0.92			sets pve02A_ssd_set1
0	0.46				osd.0	up	1
1	0.46				osd.1	up	1
-5	0.92			sets pve02A_ssd_set2
2	0.46				osd.2	up	1
3	0.46				osd.3	up	1
-3	1.84		host pve02B_ssd
-6	0.92			sets pve02B_ssd_set1
4	0.46				osd.4	up	1
5	0.46				osd.5	up	1
-7	0.92			sets pve02B_ssd_set2
6	0.46				osd.6	up	1
7	0.46				osd.7	up	1
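In case it helps, the rule can also be dry-run against the compiled map with crushtool (a sketch; the compiled map file name is a placeholder):
Code:
# simulate placement for 3 replicas with rule 0 and report statistics
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-statistics

# list any inputs the rule cannot map to the requested number of replicas
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-bad-mappings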
However, when I set the number of replicas to 3 or 4, I always get a HEALTH_WARN status:
# ceph -s
 cluster d8803d92-98dc-40b3-8f80-83e08b21e500
  health HEALTH_WARN 10 pgs degraded; 64 pgs stuck unclean
  monmap e9: 3 mons at {0=172.16.253.16:6789/0,1=172.16.253.15:6789/0,2=172.16.253.14:6789/0}, election epoch 18, quorum 0,1,2 2,1,0
  osdmap e104: 8 osds: 8 up, 8 in
   pgmap v247: 192 pgs, 3 pools, 0 bytes data, 0 objects
         297 MB used, 3773 GB / 3773 GB avail
               10 active+degraded
               54 active+remapped
              128 active+clean
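For clarity, by "setting the number of replicas" I mean commands along these lines (the pool name here is just an example), together with what I use to look at the stuck PGs:
Code:
# set / check the replica count on a pool ("rbd" is only an example name)
ceph osd pool set rbd size 3
ceph osd pool get rbd size

# inspect which PGs are degraded or stuck unclean
ceph health detail
ceph pg dump_stuck unclean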
Could anyone point out what is wrong with my setup?
Thanks in advance!