Shared Storage Recommendation for Proxmox Cluster

abdulwahab

New Member
Jul 1, 2024
Dear all, I'm preparing to set up a 3-node Proxmox cluster on Dell R740 servers for our production systems. I'm trying to decide between Ceph storage and shared storage over iSCSI. Which is the better option for shared storage in a 3-node Proxmox cluster? I need a reliable solution that supports live VM migration from one host to another in case of host failure.
 
Hi @abdulwahab , welcome to the forum.

Both Ceph and practically any iSCSI storage will provide reliable live VM migration.
However, neither will provide live migration in case of host failure. That functionality is not available in PVE/QEMU yet.
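What PVE does offer for host failure is HA restart: a VM registered as an HA resource is started again on a surviving node. As a sketch (VM ID 100 and the option values are placeholders; this must run on a PVE node with shared storage so the disk is reachable from all nodes), the built-in `ha-manager` CLI handles this:

```shell
# Register VM 100 as an HA resource so it is restarted on another
# node if its current host fails
ha-manager add vm:100 --state started --max_restart 1 --max_relocate 1

# Show the HA stack's view of the cluster and its resources
ha-manager status
```

Note that restart-on-failure means a short outage while the VM boots elsewhere; it is not the same as live migration.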

The choice between Ceph, iSCSI, or NVMe/TCP comes down to finer details of your use case, budget, skill, location, high availability needs, capacity, etc.

There is no one right answer. As with many things in IT - it depends.

Good luck.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
We need the VMs to be highly available. That's the reason we're looking for a reliable storage solution that can be shared between hosts. For the VMs, the planned storage capacity is 12 TB.

I did some initial research and found that iSCSI doesn't support snapshots. We'd appreciate advice from experts who are running small Proxmox clusters on budget-friendly hardware without sacrificing reliability.
 
We need the VMs to be highly available.
That's no problem in the sense of "if a host crashes, the VM will be restarted on another node". The next step up would be fault tolerance: a standby VM running and syncing on another node at all times, taking over immediately if a host fails. That is currently not possible in PVE, and on VMware it is a very restrictive and expensive add-on.

did initial research and I found that iSCSI doesn't support snapshots.
As always: "it depends". If your storage backend is capable of it, you can certainly have snapshots; e.g., ZFS over iSCSI offers snapshots.
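As a sketch, a ZFS-over-iSCSI backend is defined in `/etc/pve/storage.cfg` roughly like this (the storage ID, portal address, pool name, and target IQN below are placeholders; `iscsiprovider` must match your target software, e.g. LIO, istgt, iet, or comstar):

```
zfs: zfs-iscsi-store
        portal 192.168.10.50
        pool tank/vmdata
        iscsiprovider LIO
        target iqn.2003-01.org.linux-iscsi.storage.x8664:target1
        content images
        sparse 1
```

With this backend, PVE creates one ZVOL per VM disk on the target, and PVE snapshots map to native ZFS snapshots on the storage box.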

Requires the advice from experts who is running small proxmox clusters with budget friendly hardware but without losing reliability.
If you go with iSCSI, buy a dual-controller box with PVE storage support, e.g. the Blockbridge hardware.
 
The integration of Ceph into Proxmox makes it really easy to set up shared storage. We use a 3-node Ceph cluster for storage only, no virtualization, but I would recommend at least 5 nodes for Ceph in terms of availability and resilience.
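For rough sizing against the 12 TB requirement mentioned earlier: with Ceph's default 3-way replication, usable capacity is roughly raw capacity divided by three, and common guidance is to keep OSDs well below full so recovery has headroom. A back-of-the-envelope sketch (the ~80% fill target is a rule of thumb, not a Ceph default):

```python
# Back-of-the-envelope Ceph sizing for ~12 TB of usable VM storage.
# Assumptions (common practice, not Proxmox defaults):
#   - replicated pool with size=3 (the Ceph default for replication)
#   - keep OSDs below ~80% full, i.e. plan ~1.25x extra raw space
usable_tb = 12
replication = 3
headroom = 1.25  # reciprocal of the ~80% target fill level

raw_tb = usable_tb * replication * headroom
per_node_tb = raw_tb / 3  # spread across a 3-node cluster

print(f"raw capacity needed: {raw_tb:.0f} TB")  # 45 TB
print(f"per node:            {per_node_tb:.0f} TB")  # 15 TB
```

So 12 TB usable already implies roughly 45 TB of raw disk across the cluster, which is worth factoring into the iSCSI-vs-Ceph budget comparison.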

Another option, like Blockbridge, would be LINSTOR/DRBD. It's also a commercial product, and the support experience has been really good. But as has been said before:

The choice between Ceph, iSCSI, or NVMe/TCP comes down to finer details of your use case, budget, skill, location, high availability needs, capacity, etc.

If you have no experience at all, I would recommend building a cheap lab environment to test the different possibilities. Three Intel NUCs are an inexpensive way to set up a small testing cluster.
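For the lab route, the `pveceph` tooling keeps the Ceph bring-up short. A sketch of the usual sequence on a test cluster (the cluster network and device name are placeholders for your environment):

```shell
# On each node: install the Ceph packages
pveceph install

# On the first node: initialize Ceph with a dedicated cluster network
pveceph init --network 10.10.10.0/24

# On each node: create a monitor, then turn a blank disk into an OSD
pveceph mon create
pveceph osd create /dev/nvme0n1

# Create a replicated pool and register it as PVE storage
pveceph pool create vmpool --add_storages
```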
 
@bbgeek17 , how about a version of BB that can be installed on older hardware (2x HPE DL380 Gen8/9 24-bay spinners)? Marketed the same way as Proxmox, with some sort of feature restriction, e.g. only two storage nodes per Proxmox cluster.
 
Hi @jtremblay, thank you for your inquiry. I appreciate your thoughts on repurposing hardware. However, it's important to keep in mind that end-of-life equipment can present challenges. It's more susceptible to failure, no longer supported by the manufacturer, and replacement parts can be hard to find.

For long-term reliability and availability, it's often more effective to address potential issues early, before they lead to bigger problems. While repurposing parts might offer some initial savings, using less reliable solutions can end up being more costly in the long run, especially for critical workloads. Sometimes, investing upfront can provide more value and peace of mind in the long term.

 
iSCSI is definitely bugged. I'm trying to fix an issue with a Lenovo SAN (NetApp) showing ridiculous read performance, without success; debugging started 20 days ago.
 
@Testani It's extremely unlikely that there is something wrong with the Linux iSCSI implementation used by PVE. We have a ton of it in production and continuously test every release going back to PVE 6. We're not seeing any issues anywhere.

My recommendation would be to double-check your network configuration. Start by looking for dropped packets and MTU issues. If it's not obvious, call your vendor! They should be able to assist.
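Two quick first checks (the interface name and target IP are placeholders): interface counters for drops, and a don't-fragment ping to verify that jumbo frames actually pass end to end if the SAN path is supposed to run at MTU 9000.

```shell
# Look for non-zero "dropped" and "errors" counters on the storage NIC
ip -s link show eth0

# Verify MTU 9000 end-to-end: 8972 bytes of payload + 28 bytes of
# IP/ICMP headers = 9000; -M do forbids fragmentation, so this fails
# fast if any hop on the path is still at MTU 1500
ping -M do -s 8972 -c 3 192.168.10.50
```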

 
No network issue: the same hardware, same SAN, same switch, and same configuration work like a charm under VMware/Hyper-V.
 