I installed Proxmox 4.4 and Ceph Jewel according to the manual, but I ran into a problem with my OSDs.
If I reboot the first node, the OSDs on that node are still reported as online, and my VMs get suspended.
size/min: 2/1
pg num: 128
OSDs:
node1
- osd.0
- osd.1
node2
- osd.2
- osd.3
node3
- no OSDs
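For reference, these are the standard commands I use to check the cluster while a node is down (the pool name rbd below is only an example, not necessarily my pool):
Code:
# overall health, blocked requests, degraded PGs
ceph -s
# which OSDs are up/down and in/out
ceph osd tree
# replication settings of a pool ('rbd' is only an example name here)
ceph osd pool get rbd size
ceph osd pool get rbd min_size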
In the debug messages I found this:
Code:
[ 3.391884] systemd[1]: [/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
[ 3.392414] systemd[1]: [/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
[ 3.393806] systemd[1]: [/lib/systemd/system/ceph-mon@.service:24] Unknown lvalue 'TasksMax' in section 'Service'
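As far as I can tell, these warnings come from the Ceph Jewel unit files using the TasksMax= directive, which the older systemd shipped with Proxmox 4.4 (Debian Jessie) does not understand yet, so they should be harmless. If someone wants to silence them, commenting the directive out should work; this is just my assumption, not something from the manual:
Code:
# comment out the unsupported TasksMax= lines (assumption: the
# warnings are purely cosmetic on Jessie's older systemd)
sed -i 's/^TasksMax/#TasksMax/' /lib/systemd/system/ceph-osd@.service /lib/systemd/system/ceph-mon@.service
systemctl daemon-reload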
UPD: I get this problem on both nodes that have OSDs. (I have three nodes in total; one node has no OSDs.)
UPD: Ceph Config:
Code:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 192.168.87.0/24
filestore xattr use omap = true
fsid = 5ee20a5b-eeb2-4bcb-86a5-57cd9421c0d9
keyring = /etc/pve/priv/$cluster.$name.keyring
osd journal size = 5120
osd pool default min size = 1
public network = 192.168.87.0/24
[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
[mon.1]
host = node1
mon addr = 192.168.87.15:6789
[mon.2]
host = node2
mon addr = 192.168.87.13:6789
[mon.0]
host = node3
mon addr = 192.168.87.14:6789
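Since each node runs one monitor, I also check that the two remaining monitors keep quorum while a node reboots (standard monitor status commands, shown only for completeness):
Code:
# current monitor membership and quorum
ceph mon stat
ceph quorum_status --format json-pretty
UPD: CRUSH map: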
Code:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54
# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root
# buckets
host node1 {
id -2 # do not change unnecessarily
# weight 1.851
alg straw
hash 0 # rjenkins1
item osd.0 weight 0.926
item osd.1 weight 0.926
}
host node2 {
id -3 # do not change unnecessarily
# weight 1.851
alg straw
hash 0 # rjenkins1
item osd.2 weight 0.926
item osd.3 weight 0.926
}
root default {
id -1 # do not change unnecessarily
# weight 3.703
alg straw
hash 0 # rjenkins1
item node2 weight 1.851
item node1 weight 1.851
}
# rules
rule replicated_ruleset {
ruleset 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}
# end crush map
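For completeness, the map above was dumped with the usual extract/decompile round trip (the file names are just my choice):
Code:
# dump the binary CRUSH map and decompile it to text
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# after editing: recompile and inject it back into the cluster
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new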