Shared Storage for a PVE Cluster

stefanzman

We are preparing a proposal for a client and would like to recommend the best option for shared storage for their cluster(s).

I have been looking through the forum over the past couple of days and found several threads on this topic. Unfortunately, I have not been able to determine the most recommended or popular choice.

My first thought was using something like a Dell PowerVault (or a SuperMicro equivalent) with iSCSI, but some of the posts have suggested that the current iSCSI implementation for PVE is not great (old drivers, unstable, bad performance). For example:

https://forum.proxmox.com/threads/s...th-dell-equallogic-storage.43018/#post-215008

https://forum.proxmox.com/threads/shared-storage-for-proxmox-cluster.37455/#post-213759

Is this still true, or have things been updated?

Also, in the second thread, some posters recommend just mounting the storage as NFS. But others then chime in that this is too slow and will not allow for snapshots.
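For reference, if I am reading the storage docs correctly, a basic NFS entry in /etc/pve/storage.cfg would look something like this (server address and export path are made up), and with qcow2 disk images snapshots should work even on NFS:

Code:
nfs: nas1
    server 192.168.10.50
    export /volume1/pve
    path /mnt/pve/nas1
    content images,backup
    options vers=3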

Just hoping to get the latest insight on the question of the best choice for shared cluster storage. No hardware has been purchased yet, and the budget is flexible, so all options are on the table.
 

Have you already considered using Ceph? Such a setup is more robust and scalable (future proof).
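PVE can run Ceph hyper-converged, directly on the cluster nodes, via the pveceph tool. Roughly, on each node (network and disk names are examples only):

Code:
pveceph install                        # install the ceph packages
pveceph init --network 10.10.10.0/24   # dedicated storage network
pveceph createmon                      # one monitor per node (3 recommended)
pveceph createosd /dev/sdb             # one OSD per data disk
pveceph createpool vms                 # pool for VM images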
 
Thanks, Dietmar. I will ask the client if they would consider Ceph. They have been talking about a NAS or SAN with the Dell PowerVault, so I am not sure from a hardware perspective. I assume we would need separate machine(s) running Linux to create a Ceph shared storage instance that would be available to the three PVE nodes in the cluster. What type of equipment would be used to create 50TB of Ceph shared storage in this case?
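My rough back-of-envelope sizing, assuming Ceph's default 3x replication and some free-space headroom (the numbers are only an illustration):

Code:
usable target       : 50 TB
replication factor  : 3 (default size=3)
raw capacity needed : 50 TB x 3 = 150 TB
with ~25% headroom  : ~190-200 TB raw
for example         : 4 nodes x 6 x 8 TB drives = 192 TB raw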
 
Yes. This is what I thought. There are additional physical hardware and infrastructure considerations for Ceph. I will discuss this option with the customer, but they seem a bit reluctant to move much beyond the straight and narrow. Even if Ceph is unquestionably the "best" choice, they may not view it as the right one.

Proxmox VE is being considered and compared against a VMware solution for this project, and there is a comfort level with VMware and the Dell PowerVault. I am trying to limit the number of unknown quantities on the table.

With regard to the original question, is iSCSI not a good method for connecting the customer's shared storage with the current version of Proxmox?
 

iSCSI is most of the time a single point of failure, but it is used by many people. AFAIK it is very stable (the post you mention refers to the server-side implementation in FreeNAS).
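The common pattern on PVE is an iSCSI LUN with a shared LVM volume group on top. A minimal sketch of /etc/pve/storage.cfg (portal and target names are placeholders; the VG is created once on the LUN with vgcreate):

Code:
iscsi: san0
    portal 192.168.20.10
    target iqn.2018-01.com.example:storage.lun0
    content none

lvm: san0-lvm
    vgname vg_san0
    shared 1
    content images,rootdir

Note that LVM on top of iSCSI is shared across the nodes, but it does not support snapshots; for snapshots you need qcow2 on NFS, or something like ZFS or Ceph.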
 
It worked for me for 2-3 years, if I remember correctly. It was very simple:
- server A/B = iSCSI server A/B
- on a Linux client I connected to both A and B, then built an mdraid 1 on top (then partition/ext4)
- I did the same thing using a win2002 srv, creating a similar mirror (a dynamic disk, if I remember)

I used this setup to store some backups, nothing more. On several occasions I saw a mirror resync (because one iSCSI server was not reachable from the client). But I also think better performance could be obtained using AoE (not Age of Empires, just ATA over Ethernet) instead of iSCSI. A rough sketch of the Linux client side is below.
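Something like this, from memory (IPs and device names are not the real ones):

Code:
# discover and log in to both iSCSI targets (placeholder IPs)
iscsiadm -m discovery -t sendtargets -p 10.0.0.11   # server A
iscsiadm -m discovery -t sendtargets -p 10.0.0.12   # server B
iscsiadm -m node --login

# mirror the two remote block devices client-side, then format
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mount /dev/md0 /srv/backup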
 
I was using 2 CentOS servers at that time, where write-intent bitmaps are ON by default!
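If the bitmap is not already there, it can be enabled on a live array, e.g.:

Code:
# add an internal write-intent bitmap to an existing array
mdadm --grow --bitmap=internal /dev/md0
# or set it at creation time
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=internal /dev/sdb /dev/sdc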

That's good to know. Now I need time and hardware to test performance and reliability...
...any existing use-cases are welcome.
 

Another solution that I have used was this (rough sketch below):

- 2 external servers with GlusterFS (replicated bricks)
- a VIP using ucarp, with GlusterFS's NFS server running on the VIP
- on the PMX nodes, I use the NFS server via the VIP
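From memory, something like this (hostnames, brick paths and the VIP are placeholders):

Code:
# on the two storage servers
gluster peer probe gfs-b
gluster volume create gv0 replica 2 gfs-a:/bricks/b0 gfs-b:/bricks/b0
gluster volume start gv0

# floating IP via ucarp, started on both servers (VIP 10.0.0.100)
ucarp -i eth0 -s 10.0.0.11 -v 42 -p secret -a 10.0.0.100 \
      --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh &

On the PMX nodes I then add a normal NFS storage that points at the VIP (export /gv0).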
 
Yes. I was thinking we would be using dual iSCSI storage to avoid a single point of failure. But I did want to include FreeNAS as one of the options. So, is the PVE <> iSCSI <> FreeNAS setup currently not a stable configuration? What about 2 Dell MD3xx0i devices, or a Synology solution?

I was also thinking about a separate ZFS host connected via iSCSI, but it sounds like this would not truly provide "shared" storage. There is another topic right above this one where this is discussed: https://forum.proxmox.com/threads/proxmox-ha-on-shared-san-storage.45150/#post-215682
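For reference, if I read the docs right, PVE has a "ZFS over iSCSI" storage type where PVE itself manages the zvols on the target over SSH. It does support snapshots, but the ZFS box is still a single point of failure. Illustrative storage.cfg values (the iscsiprovider depends on the target's iSCSI implementation, e.g. comstar/istgt/iet):

Code:
zfs: zfs-san
    portal 192.168.20.15
    target iqn.2018-01.com.example:tank
    pool tank
    iscsiprovider iet
    content images
    sparse 1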
 
If Ceph is not an immediate option (due to the 4-node minimum requirement), what is the preferred method for shared storage with a PVE cluster and iSCSI?

Glutez - you had mentioned dual CentOS servers with write-intent bitmaps enabled by default, and also GlusterFS?
 
How many PVE nodes? Did you consider DAS?
You said 50TB, but what about performance?
Did you consider, for example, a dual NetApp with NFS/iSCSI?

And the main question, the one you will need to answer before storage:
How will you back up? What are the RTO/RPO requirements? Veeam is the major player there, and with VMware it's a win-win situation.
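On the PVE side, the built-in vzdump is the counterpart to look at; for example (VM IDs and storage name are hypothetical):

Code:
# snapshot-mode backup of VMs 100 and 101 to an NFS-backed storage
vzdump 100 101 --storage nas1 --mode snapshot --compress lzo --mailto admin@example.com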
 
