Search results

  1. Proxmox VE Ceph Server released (beta)

    Thank you. That was one of the steps I carefully followed. I noticed the error even before creating the VMs: when I tried to browse the storage name, it was already there. I am reinstalling the servers for the 5th time and will make sure the keyring is copied correctly. The reason for...
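    If the storage error persists after a reinstall, one common cause is the keyring path: the Proxmox RBD storage plugin expects the Ceph admin keyring under /etc/pve/priv/ceph/, named after the storage ID. A sketch of that step, assuming a hypothetical storage ID of "ceph-vm":

    ```shell
    # Copy the admin keyring from a Ceph node into the Proxmox cluster filesystem;
    # the file name must match the storage ID defined in the PVE GUI.
    mkdir -p /etc/pve/priv/ceph
    cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph-vm.keyring
    ```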
  2. Proxmox VE Ceph Server released (beta)

    Hello, not sure what I am doing wrong here, but the Ceph nodes keep getting the "can't connect to cluster" message. 3 Proxmox hosts, each with 3 NICs. vmbr0 = eth0 = Proxmox host (assigned IP 5.5.5.10/24, gateway 5.5.5.10), vmbr1 = eth1 = VMs (no IP assigned to subnet), vmbr2 = eth3 = Ceph nodes...
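    For reference, the pieces that must line up for PVE to reach the cluster are the monitor addresses and the storage ID in /etc/pve/storage.cfg; a sketch with hypothetical IDs and addresses (a mismatch here, or an unreachable mon IP on the Ceph-facing network, is a frequent cause of "can't connect to cluster"):

    ```
    # /etc/pve/storage.cfg fragment -- storage ID, pool, and monitor IPs are assumptions
    rbd: ceph-vm
        monhost 10.10.10.11 10.10.10.12 10.10.10.13
        pool rbd
        content images
        username admin
    ```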
  3. Proxmox VE Ceph Server released (beta)

    I couldn't find any Mellanox cards in quantity, so I opted for QLogic InfiniBand cards. What a nightmare. Proxmox won't recognize them, and QLogic says the driver is only for Red Hat Linux or SUSE. I am out of luck with 14 of these.
  4. Proxmox VE Ceph Server released (beta)

    Thank you symmcom, appreciate it. Since CephFS is still in beta and not production ready, is there an alternative? Has anyone tried GlusterFS?
  5. Proxmox VE Ceph Server released (beta)

    Thank you Udo. The screenshot looks great, but I couldn't install it on the Proxmox (Debian) host. I tried to follow the documentation, but it didn't work. Any pointers on how to install this would be much appreciated.
  6. Proxmox VE Ceph Server released (beta)

    Hello, I would like to run OpenVZ containers on the Ceph nodes. Is this possible? I don't see a way to do it. When I try to create a container, Proxmox only gives the option of the local drive. It would be great to be able to put containers on Ceph storage.
  7. Proxmox VE Ceph Server released (beta)

    Thank you Udo. Is there a way to see how much bandwidth the Ceph nodes are using? The only way I can think of right now is to go to the switch and use its web GUI bandwidth-monitoring tool. It would be nice to see a graphical performance bar for the Ceph nodes in the Proxmox GUI.
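    Pending such a GUI feature, cluster throughput can be watched from any node that has the admin keyring; a sketch (the bridge name in the iftop line is an assumption):

    ```shell
    # Live cluster log, including client read/write rates and ops:
    ceph -w

    # Per-pool throughput snapshot:
    ceph osd pool stats

    # Raw traffic on the Ceph-facing interface (requires the iftop package):
    iftop -i vmbr2
    ```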
  8. Proxmox VE Ceph Server released (beta)

    I am curious: why do we need so much bandwidth when the hard drives' throughput and transfer rates can only reach 3Gb/s or 6Gb/s? I read in various forums that even high-end graphics houses running these 10Gb networks hit the bottleneck at the hard-drive or processor level. Unless we run a...
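    Part of the answer is that the network carries more than one disk's worth of traffic: writes are replicated to multiple OSDs, and all of a node's spindles serve clients in parallel. A back-of-the-envelope sketch (the 150 MB/s per-spindle figure, the OSD count, and the replica count are assumptions for illustration):

    ```shell
    per_disk_mb=150    # assumed sequential MB/s for one SATA spindle
    osds_per_node=8    # assumed OSD count per node
    replicas=3         # assumed pool replication factor

    # Aggregate disk bandwidth one node can serve -- already close to 10Gb/s:
    echo "node aggregate: $((per_disk_mb * osds_per_node)) MB/s"

    # Backend traffic generated by 100 MB/s of client writes:
    echo "replicated writes: $((100 * replicas)) MB/s"
    ```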
  9. Proxmox VE Ceph Server released (beta)

    What about the Proxmox host? Is it advantageous to have it on the 10Gb network? In the scenario below, the VMs communicate directly with the Ceph nodes (VM -> OSD). Is it worthwhile to have a fast network on the Proxmox host itself, or is that a waste of 10Gb ports? It seems the...
  10. Proxmox VE Ceph Server released (beta)

    Thank you very much. Just one last question on this topic: may I ask for the brand and model of the switch you are using? I see both Mellanox and Flextronics. Some of these run at 20Gb or 40Gb. I also wonder whether the higher-speed switches can auto-sense the lower-speed NICs. Thanks for...
  11. Proxmox VE Ceph Server released (beta)

    Can you also share the part/model numbers of the switch and NICs you are using in production? Thanks.
  12. Proxmox VE Ceph Server released (beta)

    Thank you so much for the info. I don't see a lot of NICs available out there. Which NICs do you use (single, dual, or quad port)? Do they require any special driver for Proxmox/Debian to see them? I also have a problem with third-party NICs and Debian needing the proper driver for...
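    To check whether Debian sees a NIC at all, and which kernel driver (if any) is bound to it, a quick sketch:

    ```shell
    # PCI network devices plus the "Kernel driver in use:" line for each:
    lspci -nnk | grep -A3 -i 'ethernet\|network'

    # Interfaces the kernel actually created:
    ip link show
    ```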
  13. Proxmox VE Ceph Server released (beta)

    Does this also mean reads and writes will be faster if you spread the OSDs across the Ceph nodes rather than packing each node full of OSDs? Does it also mean it's better to have 2 x 4TB than 8 x 1TB OSDs in each node?
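    A rough sketch of the spindle-count side of that trade-off (the per-disk IOPS figure is an assumption; CRUSH placement and recovery behavior matter too):

    ```shell
    per_disk_iops=100   # assumed random IOPS for one 7200rpm SATA disk

    # Same 8TB of raw capacity per node, two layouts:
    echo "2 x 4TB: $((2 * per_disk_iops)) IOPS per node"
    echo "8 x 1TB: $((8 * per_disk_iops)) IOPS per node"
    ```

    More, smaller OSDs also mean less data to re-replicate when any single disk fails.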
  14. Proxmox VE Ceph Server released (beta)

    Thanks a lot for this. Back in the '90s I used to build and ship thousands of PCs under my own brand. For the past several years I have been using Dell, but venturing into cloud computing and Ceph, I realized their servers are limited and overpriced. Your link and comment inspired me to build an...
  15. Proxmox VE Ceph Server released (beta)

    Thank you. Pretty awesome chassis. But then again, even at 24 x 1TB 2.5" drives, that's only 24TB. I like your current chassis better: I could fit 10 x 4TB = 40TB. The price difference between the 1TB 2.5" and the 4TB 3.5" SATA drives is not that large. I have been sticking to SATA largely because SAS...
  16. Proxmox VE Ceph Server released (beta)

    Thank you. What if I put them on separate VLANs but on the same switch? That way they wouldn't be fighting for traffic on the same ports. Technically they would be on separate networks, just still on the same switch. Would that still not be recommended? I am trying to avoid having 2...
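    If the VLAN route is taken, each traffic class can ride a tagged sub-interface of the same physical port; a sketch of an /etc/network/interfaces fragment, with hypothetical VLAN IDs and addresses:

    ```
    # Proxmox traffic on VLAN 10, Ceph traffic on VLAN 20 (IDs and addresses are assumptions)
    auto vmbr0
    iface vmbr0 inet static
        address 5.5.5.10
        netmask 255.255.255.0
        bridge_ports eth0.10
        bridge_stp off
        bridge_fd 0

    auto vmbr2
    iface vmbr2 inet static
        address 10.10.10.10
        netmask 255.255.255.0
        bridge_ports eth0.20
        bridge_stp off
        bridge_fd 0
    ```

    Note that VLANs separate broadcast domains but not bandwidth: traffic on both VLANs still shares each physical port and uplink, which is the usual objection to this layout.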
  17. Proxmox VE Ceph Server released (beta)

    Thank you. I was overly excited for a moment because I thought the WD Red 2.5" also came in a 4TB version. I've noticed the WD Red doesn't list its RPM on the drive: it states a 6Gb/s transfer rate but no RPM. I wonder why. Also, it is made for NAS and geared toward home and small businesses...
  18. Proxmox VE Ceph Server released (beta)

    This really comes down to the number of VMs you have and the number of people your cluster serves. While a 10Gb network will certainly increase Ceph cluster bandwidth, if your VM cluster is small you may be OK just using a single network for both Proxmox and Ceph traffic. Or use the 10Gb network for...
  19. Proxmox VE Ceph Server released (beta)

    Thank you very much. Your suggestions helped a great deal.
  20. Proxmox VE Ceph Server released (beta)

    So how do I configure, in the Proxmox control panel and on the switch, a full connection over several links as opposed to spanning? Thank you very much. My switch is a Dell 5324, so I also need to configure link aggregation on the switch and bonding via the Proxmox PVE? I have 5 nodes, so I...
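    On the switch side that means creating an LACP link-aggregation group on the relevant ports; on the Proxmox side, a bond in /etc/network/interfaces that the bridge then sits on. A sketch with hypothetical interface names and address:

    ```
    # 802.3ad (LACP) bond of two ports; requires a matching LAG on the Dell 5324
    auto bond0
    iface bond0 inet manual
        slaves eth1 eth2
        bond_mode 802.3ad
        bond_miimon 100

    auto vmbr2
    iface vmbr2 inet static
        address 10.10.10.10
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
    ```

    Keep in mind that 802.3ad balances per flow, so a single TCP stream still tops out at one link's speed; the aggregate helps when many OSDs and clients talk at once.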