Shared Storage for a PVE Cluster

czechsys - for this particular customer, there will be 3 PVE nodes, and they did not want to consider a direct-attached storage option. Performance requirements are not extremely demanding, but they do not want to implement a "slow" solution. A network appliance is definitely one of the methods under consideration, but we want to determine the best method of connection. Some of the posts here have suggested that mounting the storage with NFS is slow and not advisable:
https://forum.proxmox.com/threads/shared-storage-for-proxmox-cluster.37455/#post-213759. iSCSI would be preferred, but there have been posts here suggesting that this connection method is not entirely up-to-date and stable. As for RTO/RPO, the customer wants to have two full clusters across campus that replicate, as a warm standby to cover DR.

Aside from this particular client, we want to understand the current best options overall for PVE cluster shared storage.

gultez - so are these two CentOS machines connected by iSCSI to the PVE cluster and configured as mdraid?
 
Hi,

Like I said, I have used this setup in the past (2 x iSCSI servers ==== network ====== client{md-raid}), but not in PMX. In any case, I don't think that matters: from the md-raid perspective it does not matter whether the RAID members are physical disks or remote disks (like iSCSI targets).
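For reference, a minimal sketch of that kind of setup on a plain Debian client (not PVE-specific); the portal addresses, IQN targets and device names below are just placeholders:

Code:
    # log in to the two iSCSI targets (portals are placeholders)
    iscsiadm -m discovery -t sendtargets -p 10.0.0.11
    iscsiadm -m discovery -t sendtargets -p 10.0.0.12
    iscsiadm -m node -p 10.0.0.11 --login
    iscsiadm -m node -p 10.0.0.12 --login

    # mirror the two remote disks with md-raid on the client,
    # assuming they appear as /dev/sdb and /dev/sdc
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc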
 
One of my primary concerns here is the iSCSI drivers in Proxmox. Can we use any commercial pair of network appliances that support iSCSI as shared storage for a cluster and safely assume the connection will work OK?
Did you ever find the answer to this question?
 
Your storage (DELL) is probably already redundant (two power supplies, two iSCSI controllers, etc.), so why worry about using iSCSI for shared storage?

My recommendation: use thick LVM (the regular LVM storage type, not LVM-thin) on top of iSCSI for shared storage, and you will be good!

In my personal experience, GlusterFS is not a good option for larger raw files; it will drive you crazy if file healing happens on a GlusterFS volume holding raw files larger than 50 GB. And the Ceph option will require much more hardware and infrastructure than, as far as I can see, you want to spend right now!
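To give an idea of what that ends up looking like, here is a rough sketch of the /etc/pve/storage.cfg entries for an iSCSI target plus a shared thick-LVM storage on top of it. The storage IDs, portal, target IQN and VG name are made up, and the volume group is assumed to have been created on the LUN beforehand:

Code:
    iscsi: dell-san
            portal 10.0.0.20
            target iqn.2001-05.com.equallogic:example-vol0
            content none

    lvm: vm-data
            vgname vg_san
            shared 1
            content images,rootdir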
 
The objective is to use multipathing to support HA and live migration. The question is how to prevent multiple nodes from spinning up and writing to the same VM on LVM-backed storage.
 
Open-iscsi and multipathd can do that job on Debian for HA on the iSCSI SAN networking side. Proxmox has a built-in locking method for writes, and thick LVM on top of iSCSI will use this method to take care of it, so don't worry; the only requirement is that all nodes must live in the same PVE cluster for proper write-lock control. Also be aware that the supported content types are only images/rootdir, and snapshots/clones are not supported on thick LVM; the same goes for templates/ISOs.
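As a rough illustration of the multipath part (along the lines of the ISCSI_Multipath wiki article linked below); the example device name and WWID are placeholders, and the exact defaults may differ per version and per array vendor:

Code:
    apt-get install multipath-tools

    # find the WWID of the iSCSI LUN (/dev/sdb is just an example)
    /lib/udev/scsi_id -g -u -d /dev/sdb

    # /etc/multipath.conf (minimal example; the WWID is a placeholder)
    defaults {
            polling_interval        2
            path_grouping_policy    multibus
            failback                immediate
            no_path_retry           queue
    }
    blacklist {
            wwid .*
    }
    blacklist_exceptions {
            wwid "3600a0b80002f1e2d0000a1b2c3d4e5f6"
    }

    systemctl restart multipath-tools
    multipath -ll    # verify; the multipath device shows up under /dev/mapper/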
 
I'm completely new to Proxmox; do you mind pointing me to some resources that explain this concept of thick LVM and how to set it up?
Also, do you know how to set up ZFS over iSCSI for an EqualLogic if I want to enable snapshots?
 
Maybe this can help you through: https://pve.proxmox.com/wiki/ISCSI_Multipath

After you set up the iSCSI connectivity to your storage, just create an LVM volume group on the iSCSI LUN and add it to Proxmox as LVM storage.
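If you prefer the command line over the GUI, it is roughly this (the device name, VG name and storage ID are placeholders; use the multipath device if you set up multipathing):

Code:
    # create a physical volume and a volume group on the iSCSI/multipath device
    pvcreate /dev/mapper/mpatha
    vgcreate vg_san /dev/mapper/mpatha

    # register it in Proxmox as shared LVM storage
    pvesm add lvm vm-data --vgname vg_san --shared 1 --content images,rootdir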

For ZFS over iSCSI you will need a ZFS-enabled storage appliance: zfs-over-iscsi is controlled on the NAS side, not on Proxmox. You can instead connect an iSCSI volume to your Proxmox host as a local disk and put a local ZFS pool on it, but then you will not have shared storage; that is only possible with ZFS-over-iSCSI, and again, your NAS must own the filesystem in that case...

Regarding EqualLogic, Proxmox does not have an integration plugin for specific storage vendors, so snapshot integration directly on the EQL will not be possible; you would only get snapshots with a local ZFS filesystem or NFS/CIFS...
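For completeness, this is roughly what a ZFS-over-iSCSI entry in /etc/pve/storage.cfg looks like when the target is something Proxmox can manage over SSH (e.g. a Linux box with LIO, or istgt/IET/Comstar); all names below are made up, and note that EqualLogic is not one of the supported providers:

Code:
    zfs: zfs-san
            portal 10.0.0.30
            target iqn.2003-01.org.linux-iscsi.storage:vmpool
            pool tank/vms
            iscsiprovider LIO
            lio_tpg tpg1
            sparse 1
            content images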
 
Is there an option to snapshot to a separate storage pool, maybe a secondary NFS of some sort? I'm moving from XS and the lack of snapshots is really noticeable. Can I run a server that exports the iSCSI backend as NFS? It seems unusual that XS has the snapshot feature but Proxmox does not.
 
I think it is not possible: since a snapshot is a "point in time" of the original VM, the snap should reside on the same storage as the original VM disk. Also, you must be careful with this; maybe I'm wrong, but at least in a VMware environment, when you create a snap, all the I/O writes go to that snap file, so keeping the snaps on a low-performance storage is not a good choice! I'm not a Proxmox pro/expert, but I think it uses the same logic!
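For what it's worth, on Proxmox the snapshot lives with the disk (for example inside the same qcow2 file on an NFS/CIFS storage), which is why it can't be sent to a separate pool. Taking and rolling back a snapshot looks like this (the VM ID and snapshot name are just examples):

Code:
    qm snapshot 100 pre-upgrade      # works with qcow2 on NFS/CIFS, ZFS, LVM-thin, Ceph...
    qm listsnapshot 100
    qm rollback 100 pre-upgrade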
 