Proxmox, Fibre Channel, and 3PAR

B. Weishoff

New Member
Feb 22, 2024
Hello.

So I've gone through what I think are the docs regarding PROXMOX and fibre channel presented LUNs. I'm using an older 3PAR 7200 in my lab, attempting to get shared storage across a PROXMOX cluster. So far, I have a fully operational cluster with a shared NFS volume, which performs as expected. The system can migrate to all PVE hosts, etc.

My question is this: with VMware, I can use the 3PAR with fibre-presented LUNs as a shared filesystem. I'm a bit confused by the docs as to whether this can be accomplished with PROX. Am I just barking up the wrong tree, or am I SOL?

I can see the paths to the storage, but my understanding is you need to use multipath-tools and LVM, which is allegedly supported for shared storage in a cluster? I walked through setting up multipath.conf, then used LVM to create a group & volume, and then laid down an ext4 filesystem on said volume. However, PROX doesn't seem to want to let me actually use it for guest systems. Is there some kind of trick to getting this to work?

Thanks in advance.

-b
 
My question is this: with VMware, I can use the 3PAR with fibre-presented LUNs as a shared filesystem. I'm a bit confused by the docs as to whether this can be accomplished with PROX. Am I just barking up the wrong tree, or am I SOL?
Yes, you can do this, but the concepts, technologies and functionality are different.
In VMware you have VMFS, a proprietary shared clustered filesystem that allows you to create VMDK files and safely access them from multiple hosts at the same time.
In Proxmox there is no native/built-in equivalent. You have a few options:
- Shared LVM. It's an adopted technology that will carve up your LUN into predictable regions (hence no thin provisioning), where Proxmox controls and arbitrates which host in the cluster has access to a given region. There is no multi-host access to the same region.
- Clustered filesystem. You can use one of the open-source variants; however, the installation, configuration and support are on you.
- Various options of putting a more flexible head on top of your FC LUNs that presents iSCSI, iSCSI/ZFS, or NVMe/TCP back to Proxmox.
- A direct-LUN approach, where you dedicate a LUN to a specific VM virtual disk (image). This does not scale well in large/complex environments.

I can see the paths to the storage, but my understanding is you need to use multipath-tools and LVM
Yes, if you want path redundancy you need to configure multipath. PVE is based on Debian with an Ubuntu-derived kernel. Configuring multipath is a standard procedure, not PVE-specific.
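As a rough illustration only (the device section values below are generic examples, not verified 3PAR settings - check HPE's multipath documentation for your array and firmware), a minimal /etc/multipath.conf could look like:

# /etc/multipath.conf - illustrative sketch, not a verified 3PAR reference config
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
devices {
    device {
        vendor               "3PARdata"
        product              "VV"
        path_grouping_policy "group_by_prio"
        prio                 "alua"
        hardware_handler     "1 alua"
        failback             "immediate"
        no_path_retry        18
    }
}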
then used LVM to create a group & volume
Yes, correct. Ensure you use the correct variant of LVM: plain (thick) LVM, not LVM-thin, since thin pools cannot be shared across nodes.
and then laid down an ext4 filesystem on said volume, however, PROX doesn't seem to want to let me actually use it for guest systems? Is there some kind of trick to getting this to work?
No, no filesystem is involved. You feed PVE the volume group in the storage configuration and PVE will auto-magically carve LV slices out of it and pass them through to the VM as raw devices. What you do with these raw devices inside your VM is up to you.
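For illustration, assuming you created a volume group called vg_3par on the multipathed LUN (the names here are placeholders), the storage definition is just:

# add the shared LVM storage from any one node; PVE propagates it cluster-wide
pvesm add lvm san-lvm --vgname vg_3par --shared 1 --content images,rootdir

# which ends up in /etc/pve/storage.cfg as:
# lvm: san-lvm
#     vgname vg_3par
#     shared 1
#     content images,rootdir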

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 

It's amazing -- 5 minutes after I posted this, I figured it out. Thank you, though, @bbgeek17 , for your prompt reply!

So for those that may have the same question -- present the LUNs, get them configured using multipathd, create a new volume group using the standard LVM procedure, but don't put a filesystem down. Once you've done this, simply use the PVE storage manager (pvesm) at the CLI to add the storage (make sure all your nodes are online and see the storage, etc.). Easy-peasy.
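Roughly, the sequence boils down to this (the device, VG and storage names below are just examples from my lab):

multipath -ll                         # confirm the LUN shows up as one mpath device
pvcreate /dev/mapper/mpatha           # initialize the multipath device as a PV - no filesystem!
vgcreate vg_3par /dev/mapper/mpatha   # create the volume group on it
pvesm add lvm san-lvm --vgname vg_3par --shared 1 --content images,rootdir
pvesm status                          # check that every node sees the new storage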

Thanks all.

-B.
 
Once you've done this, simply use the PVE storage manager (pvesm) at the CLI to add the storage (make sure all your nodes are online and see the storage, etc.)
Keep in mind that, in addition to thin provisioning, you will also be missing out on Proxmox-integrated snapshots (or any snapshots at all, in most cases).


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I hope someone can explain/answer a few questions, since this thread is similar to issues I am facing. We are testing Proxmox for a POC and possible future implementation. I just want to clarify that my Linux knowledge is not impressive. We have a Huawei storage array connected to two Proxmox hosts via FC. The hosts have FC HBAs. I have done the following:
1. Made a cluster with 2 hosts - PVE 8.2.2
2. Presented the LUNs to the hosts
3. Rescanned both hosts for changes
4. Installed and configured multipath on both Proxmox hosts:
a) apt-get install multipath-tools
5. Edited the following files, in order:
a) /etc/multipath.conf
b) /etc/multipath/wwids
c) /etc/multipath/bindings
6. Restarted multipathd service

Now when I run multipath -ll, I can see the LUNs with the alias names I created in /etc/multipath.conf.
When I check /dev/mapper/, the new LUN sometimes does not show up under the alias name but as "mpatha". Only when I delete it from /etc/multipath/bindings does it get renamed to the alias I put in /etc/multipath.conf.
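For context, the alias part of my multipath.conf looks roughly like this (the WWID and alias below are placeholders, not my real values):

multipaths {
    multipath {
        wwid  36001234567890abcdef000000000001
        alias huawei_lun01
    }
}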

My questions:
Do I need to create a PV and VG for the same LUN on every host in the cluster via CLI, and only add the shared storage via the PVE GUI?
Is this a safe configuration regarding R/W access to the LUN by only one host at a time - does Proxmox have the safety mechanisms?
I understand there is no native snapshot support while using this configuration, would it be better to pivot to SCSi or CEPH?
I can't find many materials online where FC is being used with Proxmox hosts, so I would appreciate it greatly if someone could clear this up for me...

Best regards,

- N
 
Do I need to create a PV and VG for the same LUN on every host in the cluster
If you have properly configured all hosts and they all see the LUNs, the paths, and the DM devices - you do NOT need, and should NOT be, creating the PV/VG on every host. A PV/VG created on a single host (it does not matter which one) will be visible to all connected hosts.
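As a rough sketch (device and VG names are placeholders):

# on ONE node only:
pvcreate /dev/mapper/huawei_lun01
vgcreate vg_san /dev/mapper/huawei_lun01

# on the other nodes, nothing to create - the VG should simply show up:
pvscan
vgs vg_san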

Is this a safe configuration regarding R/W access to the LUN by only one host at a time - does Proxmox have the safety mechanisms?
The safety is provided by Proxmox controlling where the LVM slice is activated (based on where the VM is active) and by using a Proxmox-specific Cluster Lock to prevent simultaneous metadata assignment (LV creation) from multiple nodes.
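You can see this for yourself: an LV backing a VM disk is only activated on the node where that VM currently runs. A rough check (the VG name is a placeholder):

lvs -o lv_name,vg_name,lv_active vg_san
# e.g. vm-100-disk-0 reports "active" only on the node running VM 100;
# on the other nodes it stays inactive until the VM migrates there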
I understand there is no native snapshot support while using this configuration, would it be better to pivot to SCSi or CEPH?
Can you clarify what you mean by "SCSi"?
It's not possible to advise on what would be better for you without knowing details about your use case, budget, skills, requirements, etc.
I can't find many materials online where FC is being used with Proxmox hosts, so I would appreciate it greatly if someone could clear this up for me.
FC is a connectivity and transport method. It is identical in its high-level integration with Proxmox to SAS, generic iSCSI, generic NVMe/TCP, etc.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
If you have properly configured all hosts and they all see the LUNs, the paths, and the DM devices - you do NOT need, and should NOT be, creating the PV/VG on every host. A PV/VG created on a single host (it does not matter which one) will be visible to all connected hosts.
First of all, thank you so much for your replies!

Okay, so if I got this right: you only need to configure multipath.conf on both hosts to whitelist the LUNs by WWID, and PV/VG creation is necessary only on one of them if multipathing is configured correctly and the hosts are in a cluster?
What happens if the host that has the PV/VG initialized goes down - does the second host know how to continue operations using that LUN if I initialize it as LVM via the GUI?

The safety is provided by Proxmox controlling where the LVM slice is activated (based on where the VM is active) and by using a Proxmox-specific Cluster Lock to prevent simultaneous metadata assignment (LV creation) from multiple nodes.
Thank you for confirming this.

Can you clarify what you mean by "SCSi"?
It's not possible to advise on what would be better for you without knowing details about your use case, budget, skills, requirements, etc.
I know it is just a different protocol. I meant "iSCSI", since I have read that it has "basic integration in PVE GUI/CLI/API" (that is one of your replies in a different thread - https://forum.proxmox.com/threads/proxmox-7-0-huawei-oceanstore-dorado-3000v6.133594/ ). We want to avoid manual management, so it is not a problem if we ditch the FC HBAs and just use iSCSI if it has benefits and makes our lives easier...

We are using both FC and iSCSI SANs but are looking to migrate away from VMware. This current setup on Proxmox uses FC.
We are currently trying Proxmox as a POC and we are trying to reach the same functionality as we've had: HA, easy management, VM backups. So I'm looking for recommendations on how to best utilize our current setup or change it...
I was looking at the "Available Storage Types" table at https://pve.proxmox.com/wiki/Storage but I'm not sure if CEPH will be viable because of the added cost of hardware (local storage). I still have a lot to read and learn because I only started to fiddle around with it 4 days ago...

Thanks in advance!
 
What happens if the host that has the PV/VG initialized goes down
You are not initializing the host, you are initializing the disk. With proper lock/cache management (which PVE is responsible for), all hosts are aware of the change on the disk.
I meant "iSCSI", since I have read that it has "basic integration in PVE GUI/CLI/API"
The native integration is basic. It saves you 3-4 steps you'd need to do during the initial connectivity phase of a new host. Once the raw disks show up, the daily management is identical to FC-connected storage.
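For the manual (non-plugin) iSCSI case, those initial steps would look roughly like this (the portal address and IQN below are placeholders):

iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260     # discover targets on the portal
iscsiadm -m node -T iqn.2024-01.example:target0 -p 192.0.2.10:3260 --login
iscsiadm -m node -T iqn.2024-01.example:target0 -p 192.0.2.10:3260 \
    --op update -n node.startup -v automatic                # re-login automatically at boot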
so I'm looking for recommendations on how to best utilize our current setup or change it...
If you are open to updating your infrastructure to extract the most value out of Proxmox, you should reach out to storage vendors that have native Proxmox integration for all of the functionality that an enterprise might expect.
I was looking at the "Available Storage Types" table at https://pve.proxmox.com/wiki/Storage
There are other commercial solutions that are not listed in that table. But if you don't have a budget for this project, then using what you already have is the primary choice.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
