Dell EMC ME5024 storage with Proxmox

Infrawizmj

New Member
Mar 24, 2025
Hi

I have 3 Dell PowerEdge R660 servers that are attached to a Dell EMC ME5024 storage array with SAS cables.

I have set Proxmox up before, but not with multiple servers accessing a storage array.

All 3 servers see the disk, as you can see in this screenshot. How can I create a storage in Proxmox that all 3 servers can access?

[Screenshot: the ME5024 disk as seen by the servers]


I am moving from Citrix Xen on my old hardware to Proxmox with this new setup.

Regards, Michael Jørgensen
 
Hi
I basically have the same hardware, just 2 servers instead of 3. I got everything working with multipath and both hosts can see it.
Where I'm stuck now is that I can't find a cluster-aware filesystem that works in this setup. I have tried GFS2 a bit, but I read somewhere that it is not supported by Proxmox, so it's not really an option.
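(For anyone following along, a quick way to sanity-check the multipath side on each host; the WWID and alias below are placeholders, not values from my setup.)

Code:
# confirm the ME5 LUN shows up with multiple active SAS paths
multipath -ll
# optionally give the LUN a stable alias in /etc/multipath.conf:
# multipaths {
#     multipath {
#         wwid  3600c0ff000xxxxxxxxxxxxxxxxxxxxxx
#         alias me5-vol
#     }
# }
# then reload the maps:
multipath -r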

I have come across the Blockbridge link before, but the limitations of LVM (no snapshots or thin provisioning) are not really what I'd like to see.

What other supported options do I have?
 
Hi @TobiasW , welcome to the forum.

Where I'm stuck now is that I can't find a cluster-aware filesystem
There is no cluster-aware filesystem (CAF) built into PVE. As such, there is no officially supported, PVE-endorsed CAF. Your choices are limited to the freely available open-source variants: OCFS2 and GFS2.
The installation, configuration and support are for you to figure out. There are many guides online with step-by-step instructions to assist you.
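To give a rough idea of what those guides cover, here is a minimal sketch for a Debian-based PVE node; the cluster name "pvecluster", the multipath device /dev/mapper/me5-vol and the mount point /mnt/me5 are assumptions, and fencing/DLM tuning is deliberately left out:

Code:
# illustrative only - follow a complete GFS2 guide before trusting data to this
apt install gfs2-utils dlm-controld
# one journal per node (3 here); the lock table name is <clustername>:<fsname>
mkfs.gfs2 -p lock_dlm -t pvecluster:me5 -j 3 /dev/mapper/me5-vol
mkdir -p /mnt/me5
mount -t gfs2 /dev/mapper/me5-vol /mnt/me5   # mount on every node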

If you are planning to purchase a PVE subscription and are wondering how a self-deployed CAF will affect support, you should reach out to Proxmox directly and inquire.

What other supported options do I have?
All of the PVE-supported options are listed here: https://pve.proxmox.com/wiki/Storage

Third-party vendor-supported options, including Blockbridge, are also available, but they are not endorsed by Proxmox one way or the other.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi @bbgeek17

Thanks for the quick answers.
I went through most of the GFS2 setup and almost had it working. My main concern is: how does Proxmox see this? In other words, is the cluster aware of this shared storage, so that I can live-migrate VMs? If that didn't work, there would be no point in having a cluster (unless I'm missing something).
This also adds a lot of additional packages and configuration. I don't want to end up in a situation where I need more than community support for a problem, only to have the technician tell me that what I have built is far from a supported environment and that there is no help available.
 
My main concern is: how does Proxmox see this?
Once you are done with the GFS2 configuration, PVE will only know about a "Directory" storage pool in its /etc/pve/storage.cfg.

You would mark this Directory storage pool as shared, informing PVE that it should expect to see this pool on all cluster nodes and that it is indeed the same pool across all of them.

Live migration will then work the same way it does on an NFS pool. Ownership of the QEMU disk files will be transferred between the nodes by the PVE cluster as necessary.
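As an illustration, the resulting entry in /etc/pve/storage.cfg could look roughly like this (the storage ID "me5-gfs2" and the mount point "/mnt/me5" are placeholders, not something PVE generates for you):

Code:
dir: me5-gfs2
        path /mnt/me5
        content images,rootdir
        shared 1

The "shared 1" flag only tells PVE that the same filesystem is mounted at that path on every node; mounting the GFS2 volume itself has to be handled outside of PVE (e.g. via fstab or systemd mount units on each node).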

This also adds a lot of additional packages and configuration. I don't want to end up in a situation where I need more than community support for a problem, only to have the technician tell me that what I have built is far from a supported environment and that there is no help available.
Generally, there are two ways to guarantee vendor support:
a) use a fully supported infrastructure and setup
b) have a custom contract in place that covers your deviations as supported

If you've reached out to Proxmox Server Solutions GmbH and they informed you that you are OK to deploy this way, I don't see a reason to doubt it.



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
OK, the shared-storage approach makes sense to me. I guess I need to start fresh, set up GFS2 properly, and see how it goes.

I will also reach out to Proxmox and have a chat with them about what we plan to do. The thing is, the hardware has already been bought, and it was designed to be used with VMware, which would be no problem if it weren't for the ongoing licence issue/drama...
Now we need to find another way to use the hardware, as we can't return it.
 
If it weren't for the ongoing licence issue/drama...
We hear this from prospective customers on a daily basis. Some have made a rather large investment in very high-end SANs.
So you can take solace in the fact that you are not alone on this journey.

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I can't use NFS
Why can't you?
Disks are 6x 1.92 TB SSDs
That is a very small number of disks for a system that supports well over 300...
Create one RAID 6 (or one RAID 10) with your 6 drives, create a single volume, and map it through all 6 ports to the 3 servers. Set up the same multipath configuration on all 3 servers.
Run mkfs.xfs (or ext4) on the volume on one server (e.g. pve1) and mount it there, e.g. as /me5, with just the empty directory /me5 existing on all 3 servers. Export /me5 via NFS (with an fsid definition) to all 3 servers. Define a "floating" static IP in the same subnet as the 3 servers in your DNS (or three times locally in /etc/hosts) and set it as a second IP on pve1 (e.g. ifconfig eno3:0 192.168.1.34, if your 3 servers have 192.168.1.31/.32/.33). Then define /me5 in the datacenter storage as shared NFS.
For regular PVE updates on all 3 servers, with the PVE VM/LXC HA settings and node maintenance mode, you don't have to move the volume around, because NFS clients simply wait for a rebooting NFS server; everything (VM/LXC) carries on normally once the server has rebooted, mounted, re-exported and taken the IP again.
If pve1 has hardware problems, shut it down, mount the volume on e.g. pve2, re-export it and move the floating IP to pve2 as well; manual failover done.
You just have to be aware that the volume's filesystem may only be mounted on one host at any time, so never put it in fstab without "nofail,noauto,...", as it should be mounted by yourself or by something like Pacemaker. The same goes for the floating IP, which should only be configured on the host that currently has the filesystem mounted.
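To make that concrete, a rough sketch of the manual steps on pve1 (device path, export options, NIC name and the storage ID are assumptions based on the example addresses above, not a tested recipe):

Code:
# on pve1 only
mkfs.xfs /dev/mapper/me5-vol                  # format the multipathed volume once
mkdir -p /me5
mount /dev/mapper/me5-vol /me5                # mount on this single host only
echo '/me5 192.168.1.0/24(rw,no_subtree_check,fsid=1)' >> /etc/exports
exportfs -ra                                  # export to the cluster subnet
ip addr add 192.168.1.34/24 dev eno3          # the "floating" service IP
# register it once, cluster-wide, as shared NFS storage:
pvesm add nfs me5 --server 192.168.1.34 --export /me5 --content images,rootdir

For a manual failover, unmount/stop on pve1, then repeat the mount, exportfs and ip addr add steps on pve2.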
 
Tip: also acquire some used 100 Gbit Omni-Path (OPA) equipment (switch, cards and their cables) to connect your 3 Dells, for the NFS service and for VM/LXC migrations.
 
Why can't you?
The ME5024 is a block storage device.

I bet this is the case and I believe it will increase. It's just not a "stable" solution anymore.
The design criteria in PVE seem to prioritize the integration of Ceph rather than optimizing for a shared SAN. This doesn't mean that using a SAN isn't possible; it's just that you have to consider what tradeoffs you are willing/able to sustain for production use, namely:
1. If you can live without snapshots, you can use your SAN for plain (thick) LVM pools (see the sketch after this list). You still get the benefit of thin provisioning, since your SAN handles that internally.
2. If you must have snapshots, you can map individual LUNs to your guests directly. You will need to manage the snapshots externally from PVE and handle in-guest quiescence separately (this can be scripted/automated, but it would be up to you to do it). If you have the development resources and budget available, you can develop a PVE storage plugin to provide the integration, and please publish it back to PVE so others can benefit; the ME5024 uses Dot Hill controllers, which are used by a number of commercial products.
3. If you are able/willing to spend money, you can engage Blockbridge to provide the necessary glue for full functionality.
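For option 1, a minimal sketch of putting thick LVM on the shared multipathed LUN (the device path, VG name and storage ID are placeholders I made up):

Code:
# run once, from any one node
pvcreate /dev/mapper/me5-vol
vgcreate vg_me5 /dev/mapper/me5-vol
# register the VG in PVE and mark it shared, since every node sees the same LUN
pvesm add lvm me5-lvm --vgname vg_me5 --shared 1 --content images,rootdir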
 
This isn't the kind of professional solution I was hoping for. In the case of a host in maintenance mode or being faulty, it sounds like a lot of manual work, and what is the point of using a SAN with an HBA connection if I have to use NFS in the end? I could just buy storage with an iSCSI interface, which I guess would make things a lot easier.
I find it sad that Proxmox doesn't have anything like VMFS built in; even GFS2 sounds like a pretty good counterpart. Is a SAN with HBA such a "random" thing that they don't support it out of the box?
We have decided internally that with this hardware design we will stick to VMware, and with the next hardware refresh we will look for another solution, which will probably need a different hardware design (like hosts with internal disks and Ceph on top).
 
This isn't the kind of professional solution I was hoping for.
That's an interesting way of putting it. "Professional" implies a person or company that specializes in a particular application or solution (as in, does it for a living, hence the "profession"). If that isn't you, nor someone else, it wouldn't be a professional solution to begin with.

In the case of a host in maintenance mode or being faulty, it sounds like a lot of manual work, and what is the point of using a SAN with an HBA connection
Where did you get that idea? There wouldn't be ANY; the whole point of the discussion above was how to facilitate snapshots, not how to maintain cluster coherence.

Is a SAN with HBA such a "random" thing that they don't support it out of the box?
Who are "they"? I suppose that comes back to the first part of my post. PVE will work fine with a SAN such as your aforementioned ME5024.
 
Maybe I used the wrong word to describe it. Let me put it differently: the way waltar described it looks to me like something that is good enough for a home/test lab, but nothing I would want as a production system for a company.

Well, we use a cluster mainly for high availability. All hosts are sized to be able to carry the whole load (speaking of a 2-node cluster; with 3 nodes it would be 2 hosts). Of course the nodes can share the load while both are running, but in the case of an update or a failure, one node can handle everything, which gives you time to fix it.

Yes, by "they" I meant PVE. If it works fine, why can't I use it straight from the GUI? I haven't seen a filesystem that would work out of the box (like GFS2). Otherwise maybe I would be better off buying iSCSI storage rather than one with an HBA connection.