Hello to all, any recommendation for SATA disks that work perfectly with Ceph?
udo and wasim, as the Ceph masters here, any suggestions?
I can confirm as well: I updated 9 nodes from Firefly to Hammer without any problem. Just in case, test on a demo server first and only then move to production.
I appreciate the great help.
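For anyone else planning the same jump, a rough sketch of the steps, assuming the repo lives in /etc/apt/sources.list.d/ceph.list (adjust to your setup) and going one node at a time:

# point the repo at hammer instead of firefly
sed -i 's/firefly/hammer/' /etc/apt/sources.list.d/ceph.list
apt-get update && apt-get dist-upgrade -y
# restart monitors first, then OSDs, and wait for HEALTH_OK in between
service ceph restart mon
service ceph restart osd
ceph -s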
Hello udo and wasim,
wasim, if I understand correctly, you have 3 nodes with 4 OSDs per node?
Is it necessary to use replica 1 rather than 2?
If two OSDs...
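For checking and changing this on a live cluster, a small sketch (mystorage is the pool name from my setup, replace it with yours):

ceph osd pool get mystorage size        # current replica count
ceph osd pool set mystorage size 2      # keep two copies of each object
ceph osd pool set mystorage min_size 1  # still serve I/O with one copy left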
I want to say a big thank you to udo and especially to wasim.
wasim helped a lot in understanding my mistake: the problem was the disks, the default GB...
wasim, check your email in 5 min.
I ran all the tests via Oracle VirtualBox.
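Since the tests run in VirtualBox with small virtual disks, one thing worth checking (my assumption about what went wrong here) is whether the OSDs ended up with a near-zero CRUSH weight, which happens with very small disks:

ceph osd tree                      # look for WEIGHT 0 on the OSDs
ceph osd crush reweight osd.0 1.0  # give the OSD a usable weight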
udo, for your information I have the same problem: when node1 is down, everything freezes.
When I shut down node2 or node3, everything is all right...
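If someone wants to check the same thing, a quick sketch of what to look at from a surviving node (assuming demo2 or demo3 is still up):

pvecm status   # Proxmox cluster quorum
ceph mon stat  # which Ceph monitors are in quorum
ceph -s        # overall cluster health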
Hello udo, thanks a lot mate for the help.
I made a fresh new installation, so the Ceph pool mystorage changed to storage.
root@demo1:~# ceph osd crush...
Here are the images.
Hello udo and wasim,
Below are the commands you asked me for; I also include two attached files, for the crushmap and the pools...
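In case it helps others, a sketch of one way to produce such an export (standard Ceph tools; crushmap.bin and crushmap.txt are just example filenames):

ceph osd getcrushmap -o crushmap.bin       # grab the compiled map
crushtool -d crushmap.bin -o crushmap.txt  # decompile to readable text
ceph osd dump | grep ^pool                 # pool settings in one shot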
Hello udo, I'll post the details tomorrow.
The problem occurs when node1 (demo1) is down.
Let me explain.
Prepare to create cluster nodes, let's say 4...
Hello again. Of course, I added it: 3 nodes, demo1, demo2, demo3.
pvecm create cluster on demo1, and then on demo2 and demo3: pvecm add demo1, etc.
udo, thanks a lot for the useful information.
After a deep investigation, here are the results, and I hope they help friends here a lot:
1. The problem does not come...
Let me explain again. We have 3 nodes (the exact commands are sketched right after the list):
1. master demo1 > pvecm create cluster
2. demo2 > pvecm add demo1
3. demo3 > pvecm add demo1
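A minimal sketch of those steps as run on each node (assuming "cluster" is the name I chose):

# on demo1, the first node
pvecm create cluster
# on demo2 and demo3, join via the first node's name
pvecm add demo1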
Hello Mr wasim,
Please see the attached image; I think the replica is two, while the others are one.
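For the record, the same information can be read from the command line; a small sketch (pool name from my setup, adjust as needed):

ceph osd dump | grep 'replicated size'  # size and min_size for every pool
ceph osd pool get mystorage size        # or query a single pool directly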
Does someone have the same problem?
For your information, I followed this guide.
It does not make sense here,...
This is the crush map:
# begin crush map
tunable choose_local_tries 0
tunable...
Hello to all, thanks for the help. I ran these commands while the master node was down.
root@demo2:~# ceph osd pool get mystorage size
The same problem with 4 nodes right now: shutting down node2, node3, or node4 is no problem.
When I shut down the master node1, it returns communication...
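If it helps the investigation, a sketch of what I would run next from a surviving node while demo1 is down (just my assumption of useful checks):

ceph health detail  # lists down monitors and stuck PGs
ceph pg dump_stuck  # which placement groups are blocked
pvecm status        # whether Proxmox still has quorum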