Search results

  1. Proxmox VE Ceph Server released (beta)

    Thank you very much. I just read through the link you've provided. So in summary, it is better to run the drives as RAID 0 and let the RAID controller use its own write-back cache? Ceph states that its software spells "the end of RAID". However, when it comes to hard drive performance we still...
  2. Proxmox VE Ceph Server released (beta)

    I was referring to the RAID controller itself. Is it better to run it with write-back cache enabled, or does Ceph have its own method of handling the write-back cache?
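
    For reference, a minimal sketch of how write-back caching is typically toggled on an LSI-based Dell PERC controller with the MegaCli tool (the binary name varies by install, e.g. MegaCli64, and the adapter/LD selectors are assumptions here):

      # Query the current cache policy of every logical drive on every adapter
      megacli -LDGetProp -Cache -LAll -aAll
      # Enable write-back caching (the controller falls back to write-through if the BBU is bad)
      megacli -LDSetProp WB -LAll -aAll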
  3. Proxmox VE Ceph Server released (beta)

    You are awesome! It worked. I moved the OSDs using the suggested command, then rebooted the server, and the OSDs are now working well on the new server h4. How do I remove the dead Ceph node? Trying to stop or remove the monitor came back with "Connection error 595: No route to host". Should...
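
    For reference, a hedged sketch of how a dead monitor and its node are typically removed, run from a surviving node (names in angle brackets are placeholders):

      # Remove the dead monitor from the monitor map
      ceph mon remove <mon-id>
      # Remove the dead host's bucket from the CRUSH map once its OSDs are gone
      ceph osd crush remove <hostname>
      # Remove the node from the Proxmox cluster itself
      pvecm delnode <hostname>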
  4. Proxmox VE Ceph Server released (beta)

    root@c2:~# ceph osd tree
    # id    weight  type name       up/down reweight
    -1      21.72   root default
    -2      7.24            host h1
    0       1.81                    osd.0   up      1
    1       1.81                    osd.1   up      1
    2       1.81                    osd.2   up      1
    3...
  5. Proxmox VE Ceph Server released (beta)

    Ceph states that its software is "the end of RAID". Many of our existing Dell servers have a built-in RAID controller, but we are not using the RAID feature and instead set each hard drive up individually as a single-drive RAID 0. With this config, Ceph is able to see each drive as a separate OSD. The battery on some of...
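
    For reference, once each single-drive RAID 0 volume is exposed as its own block device, one OSD is created per device; a minimal sketch using Proxmox's pveceph tool (device names are placeholders):

      # Create one OSD per single-drive RAID 0 volume
      pveceph createosd /dev/sdb
      pveceph createosd /dev/sdc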
  6. Proxmox VE Ceph Server released (beta)

    Thank you so much for the quick reply. I have no problem setting up the new Ceph nodes. It's the old Ceph node that I am having a problem with. The other Ceph nodes still see it as missing. It was set up as one of the monitors. Its quorum status is now indicated as "No". I've tried...
  7. Proxmox VE Ceph Server released (beta)

    Hello, the boot drive of one of my Ceph nodes (1 out of 3) just died. I understand the failed node will need to be removed entirely. However, I can't seem to remove the node. All the documentation I've seen so far only gives examples for replacing failed OSDs. But what about replacement...
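
    For reference, before the failed node itself can be removed, its OSDs usually have to be purged from the cluster; a hedged sketch (the OSD ID 3 is a placeholder; repeat for each OSD that lived on the dead host):

      # Mark the OSD out, then remove it from the CRUSH map, the auth keys, and the OSD map
      ceph osd out 3
      ceph osd crush remove osd.3
      ceph auth del osd.3
      ceph osd rm 3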
  8. Proxmox VE Ceph Server released (beta)

    Thank you. So to connect 3 Ceph nodes I need to get dual-port 40Gb cards for each server and daisy-chain them in sequence (Server A -> Server B -> Server C)? I am still confused about the switch-less technology. I tried looking on the Mellanox website for information about Virtual Protocol...
  9. Proxmox VE Ceph Server released (beta)

    Thank you very much for sharing. I don't mean to dwell on this thread, but since we are on the topic of the need for speed... I am confused. You ran 10Gb Ethernet on the Ceph nodes. How were you able to achieve 15.7 GBytes as indicated by your iperf? How do you connect the 40Gb cards without going...
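
    For reference, iperf reports the amount of data transferred in GBytes and the bandwidth in Gbits/sec; if that was iperf's default 10-second run, 15.7 GBytes works out to roughly 12.5 Gbit/s, i.e. more than a single 10GbE link. A minimal sketch of such a test (the address is a placeholder):

      # On the receiving node
      iperf -s
      # On the sending node: 10-second test with 4 parallel streams
      iperf -c 10.10.10.2 -t 10 -P 4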
  10. Proxmox VE Ceph Server released (beta)

    I tried to find info on RDMA with Ceph, but it's very vague. What exactly is RDMA and how will it help Ceph improve speed? Thanks.
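
    Briefly, RDMA (Remote Direct Memory Access) lets the NIC move data directly between the memory of two hosts, bypassing the kernel TCP stack and extra copies, which lowers latency and CPU load. Ceph's messenger at that time spoke TCP (often run over IPoIB on InfiniBand); raw RDMA capability can still be verified with the standard InfiniBand tools. A sketch, assuming the libibverbs and perftest packages are installed (the address is a placeholder):

      # List RDMA-capable devices and their link state
      ibv_devinfo
      # Raw RDMA write bandwidth test: run with no arguments on the server,
      # then point the client at the server's address
      ib_write_bw
      ib_write_bw 10.10.10.2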
  11. Proxmox VE Ceph Server released (beta)

    I am drooling while looking at the result of your iperf. Man, that is some serious speed for the Ceph nodes. An IT guy's dream. May I ask for the brand/model of: 1) the switches and NIC cards you are using? 2) the cable type (I think InfiniBand CX4 cables only support up to 10Gbps)? If you...
  12. Proxmox VE Ceph Server released (beta)

    Proxmox is using a RHEL-based kernel. It was stated in the forum that any device supported by RHEL would be compatible with Proxmox. When I tried to install a QLogic NIC driver, it gave me an error: "Kernel version 2.6.32-29-pve Binary rpm not found for the above adapters". This means it's recognizing...
  13. Proxmox VE Ceph Server released (beta)

    Has anyone successfully run an InfiniBand network of 20Gbps or 40Gbps for the Ceph nodes? If so, I would greatly appreciate feedback on which servers (motherboards/processors) you are using that are capable of pushing such high speed. Thank you.
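
    For reference, the negotiated InfiniBand link rate can be checked on each node with the standard diagnostics; a sketch:

      # Show HCA port state and rate; an active 4X QDR link reports "Rate: 40"
      ibstat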
  14. Proxmox VE Ceph Server released (beta)

    Thank you. Do I need to do anything to switch to the Red Hat kernel? I tried to install a QLogic adapter card with the driver provided by the manufacturer and got an error about the pve kernel and not finding a Red Hat rpm: "Kernel version 2.6.32-29-pve Binary rpm not found for the above adapters"
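
    For reference, vendor rpm driver packages target stock RHEL kernels, so on Proxmox an out-of-tree driver generally has to be built from source against headers matching the running pve kernel; a minimal sketch:

      # Install kernel headers matching the running Proxmox kernel, then rebuild the vendor driver from source
      apt-get install pve-headers-$(uname -r)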
  15. Proxmox VE Ceph Server released (beta)

    Can we still use this wiki doc for setting up High Availability of the VMs on Ceph nodes? https://pve.proxmox.com/wiki/High_Availability_Cluster
  16. Proxmox VE Ceph Server released (beta)

    Hello, what Linux kernel is Proxmox VE Ceph Server (beta) based on? Is it RHEL 6.4? I am installing a set of new QLogic InfiniBand NICs for the Ceph nodes. However, Proxmox is not recognizing the cards. QLogic support states they do not have drivers for Debian, only for RHEL. I...
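
    For reference, the exact running kernel and package versions can be checked directly on the node; the 2.6.32-*-pve kernel series is derived from the RHEL 6.x kernel:

      # Running kernel version
      uname -r
      # Full Proxmox VE version listing, including the kernel package
      pveversion -v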
  17. Proxmox VE Ceph Server released (beta)

    3 MONs created. The Ceph subnet is on a different switch; I also tried the same switch under a separate VLAN. 1Gb LAN. I already reloaded, so the content looks different now. I will need to reload it again to simulate the error one more time. It is working like a charm under the same subnet as the...
  18. Proxmox VE Ceph Server released (beta)

    Thank you. I already did the reinstall before seeing your message, but I can easily simulate that environment again. The strange thing is that if I use the same subnet as the Proxmox hosts, it works like a charm. Yes, I was able to ping all nodes on the Ceph subnet. For the life of me, I could not...
  19. Proxmox VE Ceph Server released (beta)

    root@prox2:~# ceph -s
    2014-06-25 09:40:24.597315 7f44704fc700  0 -- :/1010268 >> 10.10.10.3:6789/0 pipe(0xc990d0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0xc99340).fault
    2014-06-25 09:40:27.597304 7f44703fb700  0 -- :/1010268 >> 10.10.10.4:6789/0 pipe(0xc9c9e0 sd=4 :0 s=1 pgs=0 cs=0 l=1...
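
    The faults above show the client failing to reach the monitors at 10.10.10.3 and 10.10.10.4 on port 6789. A hedged sketch of the usual checks (addresses taken from the output above; ceph.conf option names may appear with spaces or underscores):

      # Is each monitor reachable on the Ceph subnet, and is the monitor port open?
      ping -c 3 10.10.10.3
      nc -zv 10.10.10.3 6789
      # Does ceph.conf point the public network and monitor addresses at that subnet?
      grep -E 'public network|mon host|mon addr' /etc/ceph/ceph.conf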