How to build the best scenario for a Proxmox cluster with 3 nodes and a single iSCSI storage volume - with snapshots and performance - Suggestions

sathlerds
Member
Feb 24, 2022
Hi guys, I need your help.

I'm starting out with Proxmox and have set up a cluster of 3 Proxmox 7.1.2 hosts, but I need to attach a storage volume, and that's where the problem lies. These hosts don't have local disk space to hold VMs; however, I have an iSCSI storage array that works with multipath and that I can attach to the Proxmox cluster. Each cluster host, and the storage itself, has 4 x gigabit Ethernet (1 Gbps) NICs.

It turns out that when I attach the storage, the only option I get is LVM on top of iSCSI, where I gain performance but lose the snapshot feature.
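For reference, this is roughly what that setup looks like (the device path, volume group name and storage ID below are just examples):

Code:
# on one node: create an LVM volume group on the multipath device
pvcreate /dev/mapper/mpatha
vgcreate vg_san /dev/mapper/mpatha

# /etc/pve/storage.cfg - shared LVM storage: raw volumes only, no snapshots
lvm: san-lvm
        vgname vg_san
        shared 1
        content images,rootdir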

So, with the hardware I have, I'm trying to work out the best scenario that gives me the following:

- I want the snapshots feature;
- Network bandwidth performance.

I can build a NAS on Linux and attach the iSCSI storage to it in order to present other protocols if I want (for example NFS, SMB, ZFS over iSCSI, etc.), but only if that is really necessary and turns out to be the best option.

When researching, I realized that the most interesting options (in my scenario) are:
1 - SMB3 with multichannel support (kernel 5.17) - example mount options sketched after this list;
2 - NFS with multipath support (a.k.a. session trunking);
3 - NFS with LACP, or a bond with layer 3+4 hashing on Proxmox and layer 2+3 hashing on the switches (less desirable);
4 - GlusterFS or CephFS (I don't know these well yet, but they seem to be aimed at replicating independent volumes between cluster hosts. I want all cluster hosts to attach and share the same volume, and I don't know whether GlusterFS or CephFS can do that. There is also the problem that CephFS recommends 10 Gbps NICs, which I don't have, and its bandwidth consumption, which is not desirable).
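For options 1 and 2, the mount options I have in mind are roughly these (server name, share, export path and channel/connection counts are just placeholders):

Code:
# SMB3 multichannel (mount.cifs, recent kernel required)
mount -t cifs //nas01/vmstore /mnt/vmstore \
    -o vers=3.1.1,multichannel,max_channels=4,credentials=/root/.cifscred

# NFS with several TCP connections to the same server address (nconnect)
mount -t nfs -o vers=4.2,nconnect=4 nas01:/export/vmstore /mnt/vmstore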

I've already ruled out ZFS over iSCSI because it doesn't provide multipath on Proxmox (unless you have an alternative to work around this problem).

I have 7 years of experience with XenServer and 5 years with VMware. On VMware I used VMFS, which solved this problem, and Xen also shared the same volume over the network easily. This is the first time I'm dealing with a cluster that doesn't have its own shared filesystem that keeps the snapshot capability.

Anyway, for those of you with more experience with Proxmox: can you help me by suggesting the best method for my existing scenario?

I appreciate the support.
 

Attachments

  • diagram_options.png
As you found, there is no direct equivalent of VMFS that is native to PVE.

The path of least resistance would be to front-end the block storage you have today with some sort of NAS that presents CIFS or NFS. Those protocols are natively supported by PVE and are easy to configure. Your VMs will use the QCOW2 storage format for their disks, which gets you native snapshots (a minimal example of adding such a share follows the list of drawbacks). The drawbacks are:
- a yet-to-be-quantified performance hit (iSCSI > NFS/CIFS > QCOW2), which may not be noticeable or may not matter to you; it really depends on your use case
- if HA is required, you will need to deploy the NAS accordingly
- extra moving pieces
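As a rough sketch (storage ID, server address and export path are placeholders), adding such a share to the whole cluster is a one-liner, and VM disks created on it can then use the qcow2 format:

Code:
# add an NFS export as shared storage for all cluster nodes
pvesm add nfs nas-vmstore --server 10.0.0.20 --export /export/vmstore \
    --content images,rootdir --options vers=4.2
# new VM disks on this storage can be created as qcow2 -> snapshots work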

The other path is to connect the block storage directly to PVE, i.e. your first diagram. Assuming that your storage has no integration option with PVE, going with a single LUN or a few large LUNs is best. On top of them you will need to deploy a "cluster-aware file system": https://en.wikipedia.org/wiki/Clustered_file_system
There are many options available, and you will have to decide which one works best for you. The kernel/userland management of this file system will run on each PVE node. However, its configuration and management will be completely independent of PVE. From the Proxmox perspective it will be a "Directory" type storage where the QCOW2 files are placed (rough example below). The benefits are: HA, snapshots, relatively reduced complexity.
The drawbacks: having to manage the clustered file system independently of the hypervisor. Depending on your skill set, that may not be an issue at all.
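A rough idea of how that ends up looking on the PVE side, assuming the clustered file system (OCFS2, GFS2, ...) is already mounted at the same path on every node (path and storage ID are examples):

Code:
# /etc/pve/storage.cfg - shared directory on top of the clustered file system
dir: shared-lun
        path /mnt/shared-lun
        content images,rootdir
        shared 1
        is_mountpoint 1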

The network redundancy options you listed are applicable to any of these solutions; which one you pick is up to you.

One other option, which requires more manual work, is to slice your iSCSI storage to provide a dedicated LUN per VM disk. The LUNs are connected to all hosts, but PVE manages which cluster member/VM has access to each one (a rough storage definition below). This is how some of our early customers deployed before we developed a native PVE storage plugin. Depending on how fluid your environment is, the benefit is no extra file system management and native HA on the storage/network side. This approach roughly maps to VVols on ESXi.
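Connection-wise this is just the plain iSCSI storage type, with the LUNs handed to the VMs directly (portal and target are examples):

Code:
# /etc/pve/storage.cfg - LUNs are used directly as VM disks, one LUN per disk
iscsi: san-direct
        portal 10.0.0.50
        target iqn.2005-10.org.example:vm-luns
        content images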


Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
A perfect explanation from @bbgeek17, with one addition:

However its configuration and management will be completely independent of PVE.
Just to be clear: this means you will not have support from Proxmox (the company) if there is a problem with this setup. If you're not comfortable with that, it's not an option.

One other option that requires more manual work is to slice your iSCSI to provide a dedicated LUN per VM disk.
AFAIK the direct use of iSCSI LUNs with QEMU (via the PVE GUI) works, but it has no multipath capability unless you configure multipath manually (a rough sketch below).
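A minimal sketch of the manual part (the WWID below is just an example; the real one comes from multipath -ll or /lib/udev/scsi_id):

Code:
apt install multipath-tools

# /etc/multipath.conf
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
multipaths {
    multipath {
        wwid  3600a098038303634343b4b66796d4f39
        alias mpatha
    }
}

systemctl restart multipathd
multipath -ll   # verify that all paths to the LUN show up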

Another way would be a somewhat strange setup like in the old VMware days with storage appliances. I use one for testing in a setup similar to yours, but with an FC SAN: one big VM with ZFS that acts as a ZFS-over-iSCSI server. It works, but is naturally not as fast as it could be.

If buying or building new gear works for you, just build your own iSCSI server with ZFS and use ZFS over iSCSI (a rough storage definition sketched below), or buy one of the shiny ones from @bbgeek17's company.
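If you go that route, the storage definition on the PVE side looks roughly like this (portal, target, pool and iSCSI provider are examples and depend on the target software running on the ZFS box):

Code:
# /etc/pve/storage.cfg - ZFS over iSCSI; snapshots/clones are done on the ZFS side
zfs: zfs-san
        portal 10.0.0.60
        target iqn.2005-10.org.example:zfs-san
        pool tank
        iscsiprovider LIO
        lio_tpg tpg1
        content images
        sparse 1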
 
AFAIK the direct use of iSCSI LUNs with QEMU (via the PVE GUI) works, but it has no multipath capability unless you configure multipath manually.
When we used this scheme we pre-configured both iSCSI and multipath manually on each host (roughly as sketched below); the PVE cluster would then decide which one was active. Admittedly, I have not tried to configure this in a while, ever since the plugin development.
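The manual iSCSI part on each node was roughly this (the portal address is an example):

Code:
# discover and log in to the target on every node
iscsiadm -m discovery -t sendtargets -p 10.0.0.50
iscsiadm -m node --login
# make the sessions persistent across reboots
iscsiadm -m node -o update -n node.startup -v automatic
# multipath then aggregates the paths; verify with
multipath -ll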

Thank you for the plug :cool:

Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
