Search results

  1. ceph sata disks recommendations

    Hello to all. Any recommendations for SATA disks that work well with Ceph? Udo and Wasim, as the Ceph masters here, any suggestions? Thanks.
  2. Ceph 0.94 (Hammer) - anyone cared to update?

    I can confirm it as well: I updated 9 nodes from Firefly to Hammer without any problem. Just in case, test on a demo server first and move to production after that.
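
    A minimal sketch of the kind of check-and-upgrade sequence meant here, assuming the Ceph packages come from an apt repository (the exact repository and daemon restart procedure depend on the setup):

        # confirm the running version before and after the upgrade
        ceph --version

        # pull the new packages, then restart the ceph daemons node by node
        apt-get update && apt-get dist-upgrade
        service ceph restart

        # verify the cluster settles back to a healthy state
        ceph health
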
  3. Ceph OSD failure causing Proxmox node to crash

    Udo, I appreciate the great help. Many regards.
  4. Ceph OSD failure causing Proxmox node to crash

    Hello Udo and Wasim. Wasim, if I understand correctly, you have 3 nodes with 4 OSDs per node? Is replica 1 necessary rather than 2? If two OSDs fail at the same time, does that mean you lose data? As far as I remember from the Ceph documentation, you have to delete these disks manually and replace...
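
    For reference, a minimal sketch of inspecting and raising a pool's replica count (the pool name mystorage is taken from later posts in this thread; the min_size value is an assumption):

        # show the current replica count
        ceph osd pool get mystorage size

        # keep three copies, and stay writable with two
        ceph osd pool set mystorage size 3
        ceph osd pool set mystorage min_size 2
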
  5. Ceph problem when master node is out

    I want to say a big thank you to Udo and especially to Wasim. Wasim helped a lot in understanding my mistake: the problem was the disks. The default size is 10 GB, and in my demo environment I used 8 GB disks, so Ceph couldn't recognize them. Thanks a lot again.
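
    A quick way to catch this class of problem is to check the candidate disks' sizes before creating the OSDs; a minimal sketch (the device names are examples only):

        # list whole disks with their sizes before running pveceph createosd
        lsblk -d -o NAME,SIZE,TYPE /dev/vdb /dev/vdc
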
  6. Ceph problem when master node is out

    Wasim, check your email in 5 minutes. Thanks.
  7. Ceph problem when master node is out

    Wasim, I ran all the tests via Oracle VirtualBox.
  8. Ceph problem when master node is out

    Udo, for your information, I have the same problem: when node1 is down, everything freezes; when node2 or node3 is shut down, everything is all right. Pfffff.
  9. Ceph problem when master node is out

    Hello Udo, thanks a lot, mate, for the help. I made a new fresh installation, so the Ceph pool mystorage changed to storage.

        root@demo1:~# ceph osd crush dump -f json-pretty
        { "devices": [
              { "id": 0, "name": "osd.0"},
              { "id": 1, "name": "osd.1"},
              {...
  10. Ceph problem when master node is out

    Hello Udo and Wasim. Below are the commands you asked for; I also include two attached files with the crushmap and the pools.

        ////////////////////////////////////////////////////////////
        # demo1 - node1: netstat -na | grep 6789
        root@demo1:~# netstat -na | grep 6789
        tcp        0      0...
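
    Port 6789 is the default Ceph monitor port, so this check verifies that a mon is listening on each node. A complementary check, sketched here on the assumption that the cluster still has monitor quorum:

        # show which monitors are in quorum and who the leader is
        ceph quorum_status --format json-pretty
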
  11. Ceph problem when master node is out

    Hello Udo, I will post the details tomorrow. The problem appears when node1 (demo1) is down; let me explain. Create a cluster of, let's say, 4, 5 or 6 nodes, whatever, 6 nodes total. If node2, node3, etc. goes down for any reason, everything works fine. But right now, if node1 for some reason is down...
  12. Ceph problem when master node is out

    Mr Wasim, hello again. Of course I will add it: 3 nodes, demo1, demo2, demo3; pvecm create cluster on demo1, then pvecm add demo1 on demo2 and demo3, etc. (see the sketch below). Would it be possible to build the same demo with three nodes and Ceph and verify whether you get the same results when node1 (demo1) is down? It would help a lot of people. I...
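
    A sketch of the cluster setup described here (hostnames from the post; "cluster" is the cluster name used in the quoted command):

        # on demo1, the first node
        pvecm create cluster

        # on demo2 and demo3, join via the first node
        pvecm add demo1
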
  13. Ceph problem when master node is out

    Udo, thanks a lot for the useful information. After a deep investigation, here are the results; they should help friends here a lot:
    1. The problem does not come from the Ceph storage, etc.
    2. The quorum belongs to the server that created the cluster, i.e. the one where pvecm create cluster was run.
    3. The other nodes were added with pvecm add to...
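
    A minimal sketch of checking where the quorum stands, and, strictly as an emergency measure, letting a surviving node operate alone (the expected-votes value of 1 is an assumption for a downed-majority scenario):

        # show nodes, votes and quorum state
        pvecm status

        # emergency only: lower the expected votes so this node is quorate again
        pvecm expected 1
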
  14. Ceph problem when master node is out

    Mr Wasim, let me explain again. We have 3 nodes: 1. master demo1 > pvecm create cluster; 2. demo2 > pvecm add demo1; 3. demo3 > pvecm add demo1. On all nodes: pveceph install, pveceph createmon, etc. (sketched below). In practice: 1. For some reason node3 is turned off; then the Ceph storage works OK without any...
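
    A sketch of the per-node Ceph steps named above (the network and the device argument are illustrative assumptions):

        pveceph install                         # install the ceph packages
        pveceph init --network 192.168.1.0/24   # once, on the first node
        pveceph createmon                       # on each node that runs a monitor
        pveceph createosd /dev/sdb              # one per data disk
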
  15. Ceph problem when master node is out

    Hello Mr Wasim, please see the attached image. I think the replica count is two; the others are one.
  16. Ceph problem when master node is out

    Does someone have the same problem? For your information, I followed this guide: http://pve.proxmox.com/wiki/Ceph_Server. It doesn't make sense here. Could somebody from the Proxmox team answer officially? Did I do something wrong? Is it a problem with the Proxmox cluster? I don't know what I should do, and if not...
  17. Ceph problem when master node is out

    This is the crush map:

        //////////////////////////////////////////////////////////////////
        # begin crush map
        tunable choose_local_tries 0
        tunable choose_local_fallback_tries 0
        tunable choose_total_tries 50
        tunable chooseleaf_descend_once 1

        # devices
        device 0 osd.0
        device 1 osd.1
        device 2 osd.2...
  18. Ceph problem when master node is out

    Hello to all, thanks for the help. I ran these commands while the master node was down.

        root@demo2:~# ceph osd pool get mystorage size
        2015-01-08 18:19:50.871026 7fbd2e364700  0 -- :/1028891 >> 192.168.1.201:6789/0 pipe(0x128b180 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x128b410).fault
        size: 2...
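
    The .fault line shows the client retrying the monitor at 192.168.1.201, i.e. the downed node1. If monitors also run on the surviving nodes, the client can be pointed at one of them explicitly; a sketch, assuming a mon on demo2 at 192.168.1.202:

        # query cluster health via a specific, still-reachable monitor
        ceph -m 192.168.1.202 health
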
  19. Ceph problem when master node is out

    The same problem with 4 nodes right now: shutting down node2, node3 or node4 causes no problem, but when the master node1 is shut down it returns "communication failure (0)".

        root@demo2:~# ceph health
        HEALTH_WARN 256 pgs degraded; 256 pgs stale; 256 pgs stuck stale; 256 pgs stuck unclean; recovery 3/6 objects...
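
    To see which monitors are still in quorum when this happens, two quick checks (a sketch, assuming the client can reach a surviving mon):

        # monitor roster and current quorum
        ceph mon stat

        # overall cluster status, including mon and osd state
        ceph -s
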
