tips for shared storage that 'has it all' :-)

mouk

Renowned Member
May 3, 2016
Hi all,

Like many others, we are contemplating a move from Broadcom/VMware to Proxmox, and are starting with a PoC now.
I ran Proxmox in the past with a Ceph cluster, so I know how great that combination is, but Ceph is (for now) not going to happen where I work, so: no Ceph.
At the institute we have a NetApp ONTAP and a Compellent, mostly serving iSCSI LUNs. The Compellent will be phased out; NetApp is the future here.

I checked https://pve.proxmox.com/wiki/Storage, and since we want quick and efficient live migrations, we will need shared storage.
We also need snapshots, and we would very much like to thin-provision our VMs.

From how I understand it, that leaves us with qcow2 on NFS. However, I read that performance-wise this is not always the best (fragmentation, and qcow2 metadata overhead). And generally I'm not very enthusiastic about using NFS as shared storage for VM images: I have seen mounts go stale/become unresponsive a few too many times. (On NFSv3; perhaps NFSv4 is better...?)
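For reference, this is roughly what an NFS storage definition looks like in /etc/pve/storage.cfg (server address, export path, and storage ID below are placeholders; I'm assuming the filer supports NFSv4.2). Note that qcow2 vs. raw is chosen per disk when you create the VM, not in the storage definition:

```
# /etc/pve/storage.cfg -- sketch, all names/addresses are placeholders
nfs: netapp-nfs
        server 192.0.2.10
        export /vol/pve_vms
        path /mnt/pve/netapp-nfs
        content images
        options vers=4.2
```

With `options vers=4.2` you also get sparse-file support and server-side copy offload, which helps with the thin-provisioning requirement.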

What seems interesting is: map Compellent multipath iSCSI LUNs as block devices to PVE, put lvm-thin on them, and use that for VMs. However (from the wiki) I understand this is not cluster-safe.
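For comparison, the cluster-safe variant of this is plain (thick) LVM on top of the multipath device, registered as shared storage; you lose thin provisioning and snapshots, but live migration works. A sketch, where the multipath alias, VG name, and storage ID are all placeholders:

```
# sketch -- device path and names are placeholders, run on one node
pvcreate /dev/mapper/compellent-lun0
vgcreate vg_compellent /dev/mapper/compellent-lun0

# register the VG as shared LVM storage for the whole cluster
pvesm add lvm compellent-lvm --vgname vg_compellent --shared 1 --content images
```

The `--shared 1` flag is what tells PVE all nodes see the same volume group, so it can hand VM disks between nodes during live migration.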

The question here: is there anything I am overlooking? Given the above, what can you recommend to us? Can we expect new developments in this area..? For example, lvm-thin becoming shareable..?
 
Hi @mouk ,


What seems interesting is: map compellent multipath iscsi LUNs block devices to pve, put lvm-thin on it, and use that for VMs. However (from the wiki) I understand this is not cluster-safe.
It's not just not cluster-safe, it's not supported by PVE and you have to go out of your way to work around all the bumpers put in place to stop you.


The question here: is there anything I am overlooking? Given the above, what can you recommend to us?
There are a number of commercial vendors in this space. It all depends on your requirements, budget, and supportability needs.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thanks for the replies, mir and bbgeek17, appreciated.

The intention (especially for this upcoming PoC) is to use what we already have in place, so we're not going to buy anything, and need no support. If we go to PRD, the support part especially will of course change.
I will check out regular LVM on Compellent iSCSI LUNs, and will report back if (on the Compellent side) this DOES turn out to be thinly provisioned. (They make a big thing out of everything being thin-provisioned, so who knows.)
(Does anyone happen to know..?)
 
NetApp has some docs around using their arrays with Proxmox: https://docs.netapp.com/us-en/netapp-solutions-virtualization/proxmox/index.html

Your options are essentially NFS, SMB/CIFS, LVM+iSCSI, LVM+NVMe over TCP, LVM+FC, or to come up with another cluster file system that can use the iSCSI backing.
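As an illustration of the LVM+iSCSI path, the two pieces in /etc/pve/storage.cfg look roughly like this (portal address, target IQN, VG name, and storage IDs are all made-up placeholders; the VG is assumed to already exist on the LUN):

```
# /etc/pve/storage.cfg -- sketch, all names/addresses are placeholders
iscsi: san-iscsi
        portal 192.0.2.20
        target iqn.2002-09.com.example:storage
        content none

lvm: san-lvm
        vgname vg_san
        shared 1
        content images,rootdir
```

`content none` on the iSCSI entry means the LUN itself is only used as a backing device, not for direct LUN-per-disk use.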


Hope this helps a little bit. We are using HPE Nimbles, primarily with LVM+iSCSI, on Proxmox 9.1 and it works pretty decently.
 
I ran proxmox in the past with ceph cluster, so I know how great that combination it, but ceph is (now) not going to happen where I work, so: no ceph.
Ceph is the only "all features" supported shared solution easily available for PVE. It's also the most heavily worked on for other virtualization platforms such as XCP-ng and various flavors of KVM. Your decision tree going forward heavily depends on WHY Ceph was rejected.

If it's a matter of available physical space for the required capacity, dense solutions exist; if it's a matter of price, Ceph will likely beat commercial solutions per usable TB. If it's just "we already have storage, you have to use it", then use it and live without the missing features.

If you DO have budget, Blockbridge.