Search results

  1. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

     Just some other thoughts after a few very quick raw tests. - Gluster is simpler but tedious to configure: format all the bricks, mount them all, and then create a Gluster volume on top, while Ceph is much more assisted on the Proxmox platform during the installation. - On this 3 node configuration Gluster if...
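    The "format all bricks, mount them all, then create a Gluster volume on top" steps mentioned above can be sketched roughly as follows; this is a minimal illustration, not the poster's exact setup, and the device name `/dev/sdb`, the mount point, the hostnames `node1`-`node3`, and the volume name `gv0` are all assumptions:

    ```shell
    # On EACH of the 3 nodes: format the brick disk and mount it
    # (/dev/sdb and /data/brick1 are placeholder choices)
    mkfs.xfs -i size=512 /dev/sdb
    mkdir -p /data/brick1
    echo '/dev/sdb /data/brick1 xfs defaults 0 2' >> /etc/fstab
    mount /data/brick1

    # On ONE node only: join the peers and create a replica-3 volume
    gluster peer probe node2
    gluster peer probe node3
    gluster volume create gv0 replica 3 \
        node1:/data/brick1/gv0 \
        node2:/data/brick1/gv0 \
        node3:/data/brick1/gv0
    gluster volume start gv0
    ```

    With `replica 3`, every file is stored on all three nodes, which is what makes the contrast with Ceph's assisted installer on Proxmox apparent: Gluster needs each of these steps done by hand.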
  2. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

     Ok guys, I missed the part about the mesh setup not being recommended with only 3 nodes... I will go with Ceph at 90%. I will also give Gluster a try, just for fun :D
  3. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

     SSD only for journaling... But I don't know if it's a requirement or just advice in my little 3 node environment
  4. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

     So in a 3 node scenario Ceph is really not recommended? Thanks!
  5. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

     Sure, I will use LACP... For the time being I have only one switch, and it's clearly a single point of failure, I know that... A second switch, however, is on the way... How will I connect 3 double NICs to 2 switches?
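    One note on the question above: plain LACP (802.3ad) requires both links of a bond to terminate on the same switch, or on a stacked/MLAG switch pair; with two independent switches, an `active-backup` bond is the usual alternative. A minimal sketch of an `/etc/network/interfaces` fragment on a Proxmox node, assuming interface names `eno1`/`eno2`, bridge `vmbr0`, and the addresses shown (all placeholders):

    ```shell
    # Bond the two NICs; one leg can go to each switch in active-backup mode
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup
        # bond-mode 802.3ad   # LACP: only if both ports land on the same
        #                     # switch, or on a stacked/MLAG pair

    # Proxmox bridge on top of the bond
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
    ```

    The trade-off: `active-backup` survives the loss of either switch but only ever uses one link's bandwidth, whereas 802.3ad can aggregate both links but ties you to a single switch (or an MLAG-capable pair).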
  6. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

     SAS disks are obviously configured as JBOD, direct attached mode, no RAID whatsoever. I purchased 3 Perc H200s on purpose. I'm a little bit concerned by the fact that almost everyone says Ceph on only 3 nodes doesn't perform as well as Gluster. Anyway, I'll do a test with Ceph using all Fiber...
  7. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

     Hi, a quick update on my little project. I have almost completed the 3 node configuration. Now I have the 3 Dell R610s with 96GB RAM each, a double NIC 10Gb SFP card each, and storage configured this way: - 5 SAS 10K 300GB disks on each node - 1 SAS 10K 146GB disk for the Proxmox OS on each node I will...
  8. Help--- LXC container problem

     Thanks Alex, I've managed to kill the process and restart the server. All seems back to normal. I need to delete the LXC container ...
  9. Help--- LXC container problem

     None of the LXC commands work. Also, even a reboot from the terminal waits and hangs...
  10. Help--- LXC container problem

    C'mon: It's Linux PAM Standard Authentication... The problem isn't so simple.
  11. Help--- LXC container problem

     Hi, the root password is ok, but logging in to the web interface is still impossible. I remember a similar issue involving LXC months ago; I don't remember how I solved it :(
  12. Help--- LXC container problem

     Hi, I have one LXC container hung. The Proxmox web interface tells me login failed. Every LXC-related command seems not to work; KVM is working. How can I regain access to my server without rebooting it? It's a single node installation... Thanks!
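    A later reply in this thread mentions killing the stuck process solved it; a generic sketch of that recovery path on a Proxmox node might look like the following, where the VMID `100` and the PID are placeholders you would take from your own output:

    ```shell
    # Identify the container and its state (100 is a placeholder VMID)
    pct list
    pct status 100

    # Try a clean stop first
    pct stop 100

    # If pct itself hangs, locate the container's processes and kill them
    ps aux | grep 'lxc.*100'
    kill -9 <pid-from-the-ps-output>   # placeholder PID

    # If the web-interface login still fails, restart the Proxmox services
    systemctl restart pveproxy pvedaemon
    ```

    Restarting `pveproxy`/`pvedaemon` only touches the management layer, so running KVM guests keep working, which matters on a single-node installation you cannot reboot.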
  13. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

     Have you tried to recover from a node failure without problems? I tried to, but with some issues...
  14. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

     Thanks alexskysilk. The starting point is a 3 node minimum, but I don't exclude that we will increase from 3 to 4 or 5 nodes... in time... it depends. I will try both systems, but at some point I need to decide which is the right direction... The cost of a 10G card is very affordable, what...
  15. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

     Thanks Roman... How about network cards? Is it mandatory to upgrade from 1Gb to 10Gb?
  16. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

     I definitely will. Do you think my hardware is good enough as a starting point, or should I think about upgrading some components right now, before I start?
  17. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

     I have never used Ceph, so I don't know whether it's easier or not. I have used Gluster just a couple of times and found it very easy to set up, but not simple to recover in case of a node failure... Ceph seems to me a little bit complicated because it uses more components (OSDs, networking, journaling, etc.)...
  18. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

     Thanks Ness. I will look into it. It's a little cluster we're talking about, with slow performance... maybe I will upgrade to a 10Gb SFP network card for every host (+ a 10Gb switch) to boost performance (if we use shared storage)... There will be a few VMs for internal test and use (Win and...
  19. Sharing Storage (Ceph or Gluster) for VM's in a 3 node scenario...

     Hi, I have 3 Dell PowerEdge R610s with 24GB RAM and 6 SAS 300GB 10K rpm disks each. I wish to build a cluster (mostly KVM) with a shared storage system between these 3 nodes, using internal storage to do it. I was thinking of using Ceph or GlusterFS, but I'm not sure which is the best choice. Each...
  20. Starting new test project... which hardware to buy?

     Hi, I wish to start an internal test project based on at least 3 PVE nodes and centralized storage for hosting VMs. Since it is an internal development project, I would like to save money by buying the necessary hardware on eBay. There is a world out there on eBay; most hardware is in good...