Hello udo and wasim,
wasim, if I understand correctly, you have 3 nodes with 4 OSDs per node?
Is it necessary to use replica 1 rather than 2?
If two OSDs fail at the same time, does that mean you lose data?
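For reference, the pool replication can be checked and changed with the standard ceph commands; a minimal sketch, assuming the pool is called mystorage as elsewhere in this thread (adjust the name to yours):

# show the current replica count and the minimum required for I/O
ceph osd pool get mystorage size
ceph osd pool get mystorage min_size

# keep two copies, allow I/O as long as at least one copy is available
ceph osd pool set mystorage size 2
ceph osd pool set mystorage min_size 1

With size 2, losing the two OSDs that hold both copies of a placement group at the same time can indeed mean data loss, which is why size 3 is usually recommended outside of test setups.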
As far as I remember from the Ceph documentation, it is necessary to manually delete these disks and replace...
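If I read the documentation right, the manual removal goes roughly like this; a sketch, assuming the failed disk is osd.1 (not tested here):

# mark the OSD out and stop its daemon on the node
ceph osd out osd.1
service ceph stop osd.1      # on a sysvinit setup

# remove it from the CRUSH map, delete its key and the OSD entry
ceph osd crush remove osd.1
ceph auth del osd.1
ceph osd rm osd.1

After that the replacement disk can be added back as a new OSD (for example with pveceph createosd on the node).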
I want to say a big thank you to udo and especially to wasim.
wasim helped a lot to understand my mistake; the problem was the disks: the default size is 10GB and in my demo environment I used 8GB disks, so Ceph couldn't recognize them.
Thanks a lot again.
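For anyone hitting the same thing: with very small test disks the OSDs can end up with a CRUSH weight of 0 (the weight is derived from the disk size in TB), so no data is ever mapped to them. A quick way to check, and a possible workaround for a demo setup only (assuming osd.0 is one of the small OSDs):

# the WEIGHT column should be non-zero for every OSD
ceph osd tree

# give a small test OSD a tiny but non-zero weight
ceph osd crush reweight osd.0 0.01

For anything real, just use disks of at least 10GB as the guide assumes.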
Hello udo, thanks a lot mate for the help.
I made a new fresh installation, so the Ceph pool mystorage changed to storage.
root@demo1:~# ceph osd crush dump -f json-pretty
{ "devices": [
{ "id": 0,
"name": "osd.0"},
{ "id": 1,
"name": "osd.1"},
{...
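If the full CRUSH map is needed as an attachment, it can also be exported and decompiled to plain text; a sketch, assuming crushtool is installed (it ships with ceph):

# list the pools first
ceph osd lspools

# export the binary CRUSH map and decompile it to a readable file
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt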
Hello udo and wasim,
below are the commands you asked for; I also include two attached files with the crushmap and the pools.
////////////////////////////////////////////////////////////
# demo1 - node1
netstat -na | grep 6789
root@demo1:~# netstat -na | grep 6789
tcp 0 0...
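Besides netstat, the monitor state itself can be checked; a sketch of the usual commands, run on any node that can still reach a monitor:

# which monitors exist and which ones are currently in quorum
ceph mon stat
ceph quorum_status --format json-pretty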
Hello udo, I will post the details tomorrow.
The problem shows up when node1 (demo1) is down.
Let me explain.
Say we create a cluster with, let's say, 4 or 5 or 6 nodes, whatever.
Total nodes: 6.
If node2, node3, etc. goes down for any reason, everything keeps working fine.
But if node1 for some reason is down...
Mr wasim,
hello again, of course I added it; 3 nodes: demo1, demo2, demo3.
pvecm create cluster on demo1, and on demo2 and demo3 pvecm add demo1, etc.
Is it possible to make the same demo with three nodes and Ceph and verify whether you get the same results when node1 (demo1) is down?
It would help a lot of people, I...
udo, thanks a lot for the useful information.
After a deeper investigation, here are the results; I hope they help friends here:
1. The problem does not come from the Ceph storage etc.
2. The quorum belongs to the server that created the cluster, I mean the one where pvecm create was run.
3. The other nodes are added with pvecm add to...
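One way to check what actually happens to the quorum when node1 is down is to look at the vote counts on a surviving node; a sketch of the checks I mean:

# shows expected votes, total votes and whether the cluster is quorate
pvecm status

# list the cluster members as the cluster stack sees them
pvecm nodes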
Mr wasim,
let me explain again, we have 3 nodes:
1. master demo1 > pvecm create cluster
2. demo2 > pvecm add demo1
3. demo3 > pvecm add demo1
On all nodes: pveceph install, pveceph createmon, etc. (see the sketch below).
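A minimal sketch of that setup, assuming the cluster name is "cluster" and the Ceph network is 192.168.1.0/24 as in the outputs above (adjust both, and the disk name, to your environment):

# on demo1
pvecm create cluster

# on demo2 and demo3
pvecm add demo1

# on every node: install ceph, then create a monitor
pveceph install
pveceph init --network 192.168.1.0/24    # only once, on the first node
pveceph createmon

# on every node, for each data disk (disk name is just an example)
pveceph createosd /dev/sdb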
Practice 1:
for some reason turn off node 3; then the Ceph storage keeps working OK without any...
Does someone have the same problem?
For your information, I followed this guide:
http://pve.proxmox.com/wiki/Ceph_Server
It does not make sense to me; could somebody from the Proxmox team answer officially?
Did I do something wrong? Is it a problem with the Proxmox cluster?
I don't know what I should do, and if not...
Hello to all, thanks for the help. I ran these commands while the master node is down.
root@demo2:~# ceph osd pool get mystorage size
2015-01-08 18:19:50.871026 7fbd2e364700 0 -- :/1028891 >> 192.168.1.201:6789/0 pipe(0x128b180 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x128b410).fault
size: 2...
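The fault line above shows the client trying only 192.168.1.201:6789, which suggests the commands depend on a single monitor on demo1. What I would check while the cluster is healthy, assuming the standard Proxmox config path:

# how many monitors are defined and where they listen
ceph mon stat
cat /etc/pve/ceph.conf    # look at the [mon.X] sections

# if only demo1 has a monitor, add one on the other nodes
pveceph createmon         # run on demo2 and on demo3

With monitors on all three nodes, the Ceph monitor quorum can survive the loss of any single node.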
The same problem with 4 nodes right now: shutting down node2, node3 or node4 is no problem,
but when the master node1 is shut down, it returns communication failure (0).
root@demo2:~# ceph health
HEALTH_WARN 256 pgs degraded; 256 pgs stale; 256 pgs stuck stale; 256 pgs stuck unclean; recovery 3/6 objects...
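When the output gets cut off like that, the longer form is usually more useful; a sketch of the commands I would run to see which PGs and OSDs are affected:

# full explanation of every warning
ceph health detail

# overall status, OSD layout and the stuck placement groups
ceph -s
ceph osd tree
ceph pg dump_stuck stale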