What's your opinion on the best shared storage?

Jun 15, 2022
Preface: I have been a Proxmox user/supporter for many years and love the platform and all that it offers. But every VM node I have built has used local storage to hold the VM disks. Now I'm facing a problem, and I thought you would be the best people to ask about it.

Our company is in the process of building out a new VM stack and will be using Proxmox VE Enterprise for the hypervisor. We want to build an HA cluster, but we are getting a lot of conflicting results when researching the best shared storage for the VM nodes. At first we considered NFS on a large TrueNAS server, but we worry that the I/O speed will be too low to support very many VMs. At the moment we are running approximately 20 VMs with varied workloads and use cases (some Windows, a lot of Linux machines), but that number could double in the next couple of years. File vs. block storage is another consideration for us: which one is better for VM disks? After talking to several storage vendors and getting quotes, we are slowly moving toward a Ceph cluster. I am a Linux professional (certified, anyway) and have been using Linux for a couple of decades now, but I have zero experience with Ceph. If you could give your opinion, it would be much appreciated.
Thank you in advance!

TLDR: What is the best shared storage solution for Proxmox, in your opinion?
 
Ceph has the best feature set, and if you do not already have dedicated shared storage such as NFS, FC, or iSCSI, it is the way to go. It's as simple as that.
 
Agreed: Ceph is pretty good.
I know just one other very good solution, but it's very expensive.
 
Ceph looks very complex, but in reality it's pretty straightforward.
In my lab I've set up three tiny form-factor PCs with six 2.5 Gbit USB NICs, connected to each other in a full mesh, and one NVMe SSD in each.
I was pretty astonished by the results.
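For anyone curious what a three-node full mesh looks like in practice: each node gets two point-to-point links, one to each peer, so no switch is needed. A minimal sketch of the routed variant in /etc/network/interfaces — interface names and addresses here are assumptions for illustration, not taken from the setup above:

```
# Node 1 of 3. en05/en06 are the two mesh NICs (assumed names).
auto en05
iface en05 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.51/32 dev en05   # direct link to node 2

auto en06
iface en06 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.52/32 dev en06   # direct link to node 3
```

Nodes 2 and 3 mirror this with their own addresses and routes to the other two peers.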
 
You could look at Linstor as well. It's also hyper-converged, but it performs a whole lot better than Ceph.
I will check it out. Thank you. We are in the process of talking to different vendors now and trying to get a feel for what works best for our use case. Is there a vendor out there that could build a Linstor cluster and offer support?
 
How nice it would be if we had Linstor built in, like Ceph.

The hardest-hitting part is:
https://linbit.com/blog/how-does-linstor-compare-to-ceph/ said:
The downside to CRUSH is a complete loss of all data when Ceph isn’t operating normally. While a catastrophic failure of a LINSTOR cluster can be recovered by most Linux admins without extensive LINSTOR specific knowledge.
 
Ceph, when used as the built-in option in PVE, has the advantage of easy integration. That said, you need to carefully plan your resources for a production hyper-converged solution: make sure you understand the node, CPU, network, and memory needs, as well as the failure scenarios.
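The resource planning mentioned above can be sanity-checked with simple arithmetic before buying hardware. A rough sketch: estimate how much RAM the Ceph daemons on a hyper-converged node will consume and see what's left for VMs. The per-daemon figures below are rule-of-thumb assumptions, not official requirements; check the Ceph and Proxmox sizing guides for your versions.

```python
# Back-of-the-envelope check: does a hyper-converged node leave enough
# RAM for VM workloads after the Ceph daemons take their share?
# All per-daemon figures are rough rule-of-thumb assumptions.

def vm_ram_budget_gib(node_ram_gib, osds_per_node,
                      ram_per_osd_gib=5,   # assumed: BlueStore OSD target + overhead
                      mon_mgr_gib=4,       # assumed: monitor + manager daemons
                      host_os_gib=4):      # assumed: Proxmox host + kernel
    """RAM left over for VM workloads on one node."""
    ceph_overhead = osds_per_node * ram_per_osd_gib + mon_mgr_gib
    return node_ram_gib - ceph_overhead - host_os_gib

# Example: a 256 GiB node with 4 NVMe OSDs
print(vm_ram_budget_gib(256, 4))  # → 228 (GiB left for VMs)
```

Run this for your worst case (recovery traffic pushes OSD memory up), not your average.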


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
We had our first meeting with Linbit on Friday, and just from that initial call I am feeling much better about Linstor for our use case. We are building out the VM nodes to the point where any single node can run all the VMs. It may be a waste of resources, but in the event of a massive failure we can lose two entire servers and still operate at full capacity. If the nodes can sync with each other while all running local NVMe and SATA SSD storage, you can't get a much better-performing option. We even have the ability to sync to satellite locations.
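The sizing rule described above (survive the loss of two servers at full capacity) is just N+2 arithmetic, and it's worth writing down explicitly when quoting hardware. A trivial sketch with illustrative numbers:

```python
# N+2 sizing: with `nodes` servers, can we lose `failures` of them
# and still run the whole VM fleet on the survivors?
# Capacity/load units are abstract (e.g. "VM slots"); values are illustrative.

def survives(nodes, failures, per_node_capacity, total_vm_load):
    survivors = nodes - failures
    return survivors * per_node_capacity >= total_vm_load

# 3 nodes, each sized to carry the full 20-VM load by itself:
print(survives(3, 2, per_node_capacity=20, total_vm_load=20))  # → True
```

The same check with realistic CPU/RAM numbers quickly shows how much headroom "lose two and keep running" actually costs.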
 
@jonmckinney, I'm facing a similar challenge at the company I'm consulting for and, like you, I've been doing some research on Ceph.

My findings (rather subjective) are:
  • for a full production cluster you need a 3/2 pool configuration (size 3, min_size 2: three replicas, with at least two available to serve I/O), which effectively leaves 1/3 of the raw space as usable space. This is a bit of a bummer for us.
  • one unknown is the impact of the Ceph cluster on the actual VM workload (although CPU and RAM are pretty generous on each of the 7-9 nodes we are planning to deploy).
  • I've been researching TrueNAS Enterprise as centralized storage with NFS exports, but so far I haven't found anything conclusive in terms of performance (Ceph vs. NFS).
Hope this helps.
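To make the replica arithmetic from the first bullet concrete: with size=3, every object is stored three times, so usable space is roughly raw/3, and in practice you also keep headroom below the full-cluster thresholds (the 0.85 factor below is an assumption mirroring Ceph's default "nearfull" ratio):

```python
# Usable capacity of a Ceph pool with three replicas (size=3).
# headroom=0.85 is an assumed safety margin (Ceph's default nearfull ratio).

def usable_tib(raw_tib, replicas=3, headroom=0.85):
    return raw_tib / replicas * headroom

print(usable_tib(100))  # ~28.3 TiB usable out of 100 TiB raw
```

That roughly 3.5:1 raw-to-usable ratio is the figure to carry into any vendor quote comparison.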
 
