Shared Storage for a PVE Cluster

Discussion in 'Proxmox VE: Installation and configuration' started by stefanzman, Jul 8, 2018.

  1. stefanzman

    stefanzman Member
    Proxmox VE Subscriber

    Joined:
    Jan 12, 2013
    Messages:
    37
    Likes Received:
    0
    We are preparing a proposal for a client and would like to recommend the best option for shared storage for their cluster(s).

    I have been looking through the forum over the past couple of days and found several threads on this topic. Unfortunately, I have not been able to determine the most recommended or popular choice.

    My first thought was using something like a DELL PowerVault (or SuperMicro equivalent) with iSCSI, but some of the posts have suggested that the current iSCSI implementation for PVE is not great (old drivers, unstable, bad performance). For example:

    https://forum.proxmox.com/threads/s...th-dell-equallogic-storage.43018/#post-215008

    https://forum.proxmox.com/threads/shared-storage-for-proxmox-cluster.37455/#post-213759

    Is this still true, or have things been updated?

    Also, in the second thread, some posters recommend just mounting via NFS. But others then chime in that this is too slow and that it will not allow for snapshots.

    Just hoping to get the latest insight on the question of the best choice for shared cluster storage. No hardware has been purchased yet, and the budget is flexible - so all options are on the table.
     
  2. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,168
    Likes Received:
    268
    Have you already considered using Ceph? Such a setup is more robust and scalable (future proof).
     
  3. stefanzman

    stefanzman Member
    Proxmox VE Subscriber

    Joined:
    Jan 12, 2013
    Messages:
    37
    Likes Received:
    0
    Thanks, Dietmar. I will ask the client if they would consider Ceph. They have been talking about a NAS or SAN with the DELL PowerVault, so I am not sure from a hardware perspective. I assume we would need separate machine(s) running Linux to create a Ceph shared storage instance that would be available to the three PVE nodes in the cluster. What type of equipment would be used to create 50 TB of Ceph shared storage in this case?
     
  4. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,168
    Likes Received:
    268
    Ceph is a distributed storage system, so you need several nodes for the storage (at least 4). See

    https://ceph.com/
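
    Whether it runs on separate storage nodes or hyper-converged on the PVE nodes, a Ceph setup on Proxmox VE is managed with the pveceph tool. A rough sketch of the steps (the storage network and disk devices below are placeholders, to be adapted to the actual hardware):

        # on every node: install the Ceph packages
        pveceph install

        # once, on the first node: initialise Ceph with the dedicated storage network
        pveceph init --network 10.10.10.0/24

        # on each node: create a monitor, then turn the local disks into OSDs
        pveceph createmon
        pveceph createosd /dev/sdb
        pveceph createosd /dev/sdc

        # create a pool for VM images, then add it as RBD storage (Datacenter -> Storage)
        pveceph createpool vm-pool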
     
  5. stefanzman

    stefanzman Member
    Proxmox VE Subscriber

    Joined:
    Jan 12, 2013
    Messages:
    37
    Likes Received:
    0
    Yes, this is what I thought. There are additional physical hardware and infrastructure considerations for Ceph. I will discuss this option with the customer, but they seem a bit reluctant to move much beyond the straight and narrow. Even if Ceph is unquestionably the "best" choice, they may not view it as the right one.

    Proxmox VE is being considered and compared against a VMware solution for this project, and there is an existing comfort level with VMware and the DELL PowerVault. I am trying to limit the number of unknown quantities on the table.

    With regard to the original question, is iSCSI not a good method for connecting shared cluster storage in the current version of Proxmox?
     
  6. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,168
    Likes Received:
    268
    iSCSI is usually a single point of failure, but it is used by many people. AFAIK it is very stable (the post you mention refers to the server-side implementation in FreeNAS).
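
    On the PVE side, the usual pattern is to attach the LUN with the built-in iSCSI storage type and layer LVM on top of it, marked as shared, so every node sees the same volume group. A minimal /etc/pve/storage.cfg sketch (portal, target IQN and volume group name are placeholders):

        iscsi: san1
                portal 10.0.0.10
                target iqn.2001-05.com.example:storage.lun1
                content none

        lvm: san1-lvm
                vgname vg_san1
                shared 1
                content images,rootdir

    This assumes the volume group vg_san1 has been created once on the LUN. Note that plain LVM on iSCSI gives you live migration but no snapshots; snapshots need a different backend.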
     
  7. guletz

    guletz Active Member

    Joined:
    Apr 19, 2017
    Messages:
    563
    Likes Received:
    87
    ... but you can use 2 different iSCSI servers, and then on the client you can configure RAID 1!
     
    Knuuut likes this.
  8. Knuuut

    Knuuut Member

    Joined:
    Jun 7, 2018
    Messages:
    36
    Likes Received:
    3
    That's what I have been thinking about for over a year. Do you have any experience with this setup?
     
  9. guletz

    guletz Active Member

    Joined:
    Apr 19, 2017
    Messages:
    563
    Likes Received:
    87
    It worked for me for 2-3 years, if I remember correctly. It was very simple (rough sketch below):
    - server A/B = iSCSI server A/B
    - on a Linux client I connected to A and B, then made an mdraid 1 (then partition/ext4)
    - I did the same thing using a Win2002 server, creating a similar mirror (dynamic disk, if I remember)

    I used this setup to store some backups, nothing more. On several occasions I have seen a mirror resync (because one iSCSI server was not available on the client). But I also think that better performance could be obtained using AoE (not Age of Empires, just ATA over Ethernet) instead of iSCSI.
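
    Roughly, the Linux client side looks something like this (the portal addresses and device names are just examples):

        # log in to both iSCSI targets
        iscsiadm -m discovery -t sendtargets -p 10.0.0.11
        iscsiadm -m discovery -t sendtargets -p 10.0.0.12
        iscsiadm -m node --login

        # mirror the two remote LUNs with mdraid, then put ext4 on top
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        mkfs.ext4 /dev/md0
        mount /dev/md0 /mnt/backups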
     
    #9 guletz, Jul 9, 2018
    Last edited: Jul 9, 2018
  10. Knuuut

    Knuuut Member

    Joined:
    Jun 7, 2018
    Messages:
    36
    Likes Received:
    3
  11. guletz

    guletz Active Member

    Joined:
    Apr 19, 2017
    Messages:
    563
    Likes Received:
    87

    I was using 2 CentOS servers at that time, where write-intent bitmaps are ON by default!
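
    If the distribution does not enable it, it can be checked and added to an existing array with mdadm (the array name is just an example):

        # see whether a write-intent bitmap is present
        mdadm --detail /dev/md0 | grep -i bitmap

        # add an internal bitmap, so a temporarily missing iSCSI leg only
        # needs a partial resync instead of a full rebuild
        mdadm --grow /dev/md0 --bitmap=internal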
     
  12. Knuuut

    Knuuut Member

    Joined:
    Jun 7, 2018
    Messages:
    36
    Likes Received:
    3
    That's good to know. Now I need time and hardware to test performance and reliability...
    ...any existing use-cases are welcome.
     
  13. guletz

    guletz Active Member

    Joined:
    Apr 19, 2017
    Messages:
    563
    Likes Received:
    87
    Another solution that I used was this (rough sketch below):

    - 2 external servers with GlusterFS (replicated bricks)
    - a VIP using ucarp, and GlusterFS acting as an NFS server on the VIP
    - on the PMX nodes, I use the NFS server via the VIP
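
    In rough terms (hostnames, brick paths and the VIP address below are just placeholders):

        # on the two storage servers: replicated GlusterFS volume
        gluster peer probe stor2
        gluster volume create gvol0 replica 2 stor1:/bricks/gvol0 stor2:/bricks/gvol0
        gluster volume start gvol0

        # ucarp floats a VIP (e.g. 10.0.0.100) between stor1 and stor2;
        # the PVE nodes then mount the volume over NFS via that VIP, e.g.
        mount -t nfs 10.0.0.100:/gvol0 /mnt/pve/gluster-nfs

    On the PVE side the VIP is simply added as a normal NFS storage, so it does not matter which GlusterFS server currently holds the address.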
     
  14. stefanzman

    stefanzman Member
    Proxmox VE Subscriber

    Joined:
    Jan 12, 2013
    Messages:
    37
    Likes Received:
    0
    Yes. I was thinking we would use dual iSCSI storage to avoid a single point of failure. But I did want to include FreeNAS as one of the options. So, is the PVE <> iSCSI <> FreeNAS setup currently not a stable configuration? What about 2 Dell MD3xx0i devices or a Synology solution?

    I was also thinking about a separate ZFS host connected via iSCSI, but it sounds like this would not truly provide "shared" storage. There is another topic right above this one where this is discussed: https://forum.proxmox.com/threads/proxmox-ha-on-shared-san-storage.45150/#post-215682
     
  15. stefanzman

    stefanzman Member
    Proxmox VE Subscriber

    Joined:
    Jan 12, 2013
    Messages:
    37
    Likes Received:
    0
    If Ceph is not an immediate option (due to the 4 node minimum req), what is the preferred method for shared storage with a PVE cluster and iSCSI?

    Guletz - you had mentioned dual CentOS with write-intent bitmaps enabled by default? Also GlusterFS?
     
  16. mir

    mir Well-Known Member
    Proxmox VE Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,442
    Likes Received:
    91
    I would suggest OmniOS, which is now OmniOS CE.
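
    With an OmniOS box as the target, PVE's "ZFS over iSCSI" storage type can be used, which creates one zvol per guest disk so snapshots and clones keep working. A minimal /etc/pve/storage.cfg sketch with placeholder values, assuming the comstar target framework on OmniOS:

        zfs: omnios-san
                iscsiprovider comstar
                portal 10.0.0.20
                target iqn.2010-08.org.illumos:02:san1
                pool tank
                sparse 1
                content images

    The PVE nodes also need root SSH key access to the OmniOS host, since the zvols are created over SSH.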
     
  17. guletz

    guletz Active Member

    Joined:
    Apr 19, 2017
    Messages:
    563
    Likes Received:
    87
    Yes. By default, write-intent bitmaps are enabled!
     
  18. stefanzman

    stefanzman Member
    Proxmox VE Subscriber

    Joined:
    Jan 12, 2013
    Messages:
    37
    Likes Received:
    0
    Very good, and working well?

    How is it otherwise configured? Is the connection through iSCSI? How are the drives formatted?
     
  19. czechsys

    czechsys Member

    Joined:
    Nov 18, 2015
    Messages:
    122
    Likes Received:
    3
    How many PVE nodes? Did you consider DAS?
    You said 50 TB, but what about performance?
    Did you consider, for example, a dual NetApp with NFS/iSCSI?

    And the main question... which you will need to answer before storage:
    How will you back up? What are the RTO/RPO requirements? Because Veeam is a major player there, and with VMware it's a win-win situation.
     
  20. guletz

    guletz Active Member

    Joined:
    Apr 19, 2017
    Messages:
    563
    Likes Received:
    87
    Yes. It has caused no problems for many years, like I said before.
    ext4
     