Hi guys, I need your help.
I'm new to Proxmox and have set up a cluster of 3 Proxmox 7.1-2 hosts, but I need to attach a storage volume, and therein lies the problem. The hosts themselves have no disk space to allocate to VMs; however, I have an iSCSI SAN that works with multipath, and I can attach it to the Proxmox cluster. Each cluster host, and the storage itself, has 4 x gigabit (1 Gbps) Ethernet NICs.
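For reference, the multipath attachment described above could be sketched roughly like this on each Proxmox host (the portal IPs and device names below are placeholders, not your real values — adjust to your SAN):

```shell
# Discover and log in to the iSCSI target over two storage NICs
# (192.0.2.10 / 192.0.2.11 are hypothetical portal addresses)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m discovery -t sendtargets -p 192.0.2.11
iscsiadm -m node --login

# Install dm-multipath so both paths collapse into one block device
apt install multipath-tools
cat > /etc/multipath.conf <<'EOF'
defaults {
    user_friendly_names yes
    path_grouping_policy multibus
    path_selector        "round-robin 0"
}
EOF
systemctl restart multipathd

# The LUN should now appear once, e.g. as /dev/mapper/mpatha,
# which can then be used as an LVM physical volume in Proxmox
multipath -ll
```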
It turns out that when I attach this storage, the only option I get is LVM over iSCSI, where I gain performance but lose the snapshot feature.
So I'm trying to work out the best scenario that, with the hardware I have, gives me:
- the snapshot feature;
- good network bandwidth.
I could also build a NAS on Linux that attaches the iSCSI storage and re-exports it over other protocols (e.g. NFS, SMB, ZFS over iSCSI) — but only if that is really necessary and is the best option.
From my research, the most interesting options in my scenario seem to be:
1 - SMB3 with multichannel support (kernel 5.17);
2 - NFS with multipath support (a.k.a. session trunking);
3 - NFS with LACP (bond mode layer3+4 on Proxmox + layer2+3 on the switches) (less desirable);
4 - GlusterFS or CephFS (I don't know these well yet, but they seem mainly aimed at replicating independent volumes between cluster hosts. I want all cluster hosts to attach and share the same volume, and I don't know whether GlusterFS or CephFS can do that. There's also the problem that CephFS recommends 10 Gbps NICs, which I don't have, and its bandwidth consumption, which is not desirable.)
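To make options 1 and 2 above concrete, the client side could look roughly like this (server name, export path, share name, and channel counts are placeholders; `nconnect` needs a reasonably recent kernel and the CIFS multichannel options need a recent cifs.ko, so treat this as a sketch to verify against your kernel):

```shell
# Option 2 sketch: NFSv4.1 with several parallel TCP connections.
# nconnect opens multiple connections to one server IP; true session
# trunking across different server IPs needs additional support on
# newer kernels.
mount -t nfs4 -o vers=4.1,nconnect=4 \
    nas.example.lan:/export/vmstore /mnt/pve/vmstore

# Option 1 sketch: SMB3 with multichannel, spreading I/O across NICs.
mount -t cifs //nas.example.lan/vmstore /mnt/pve/vmstore-smb \
    -o vers=3.1.1,multichannel,max_channels=4,credentials=/etc/pve-smb.cred
```

With qcow2 disk images on such a shared filesystem, Proxmox regains snapshot support, which is what makes these options attractive compared to raw LVM over iSCSI.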
I've already ruled out ZFS over iSCSI because Proxmox doesn't support multipath with it (unless someone has a workaround for this).
I have 7 years of experience with XenServer and 5 with VMware. On VMware I used VMFS, which solved this problem; Xen also shared the same volume over the network easily. This is the first time I'm dealing with a cluster that has no native clustered filesystem that preserves snapshot capability.
Anyway, for those of you with more Proxmox experience: can you suggest the best method for my existing scenario?
I appreciate the support.