2 LUNs show as multiple disks on node

anton.louw

New Member
Mar 4, 2020
Hi All,

I am fairly new to this, so it might be a simple issue. I have a node to which I have presented storage from a SAN over FC. I have installed multipathing on the node, but when I run 'lsblk', I see multiple entries for my two LUNs. Is there something I am missing? I have attached screenshots for reference.

Any help would be greatly appreciated.

Thanks
 

Attachments

  • lsblk.png (84.4 KB)
  • multipath-ll.png (26.5 KB)
Yes, that's normal. You can set up LVM on your multipathed LUNs in /dev/mapper/36*. I'd suggest naming your LUNs properly; an example for that is here.
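For illustration, with multipath working each LUN is expected to appear once per path as an sd* device, plus one consolidated mapper device on top; names and sizes below are hypothetical:

    sdb                                   8:16   0  500G  0 disk
    └─36005076000000000000000000000001  253:0   0  500G  0 mpath
    sdc                                   8:32   0  500G  0 disk
    └─36005076000000000000000000000001  253:0   0  500G  0 mpath

A minimal multipath.conf alias sketch, assuming a placeholder WWID (take the real one from 'multipath -ll'):

    multipaths {
        multipath {
            wwid  "36005076000000000000000000000001"
            alias san-lun1
        }
    }

After 'systemctl reload multipathd' the friendly name should show up as /dev/mapper/san-lun1.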

Your way would be:
- pvcreate /dev/mapper/<lun-name> (for all entries there)
- if you want everything in one big chunk of space, create a volume group from both LUNs: vgcreate san-lvm /dev/mapper/<lun-name1> /dev/mapper/<lun-name2>
- now you can add a new LVM storage via the PVE GUI and store everything on there (the commands are sketched below).
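Put together, the CLI side might look like this (using the hypothetical aliases san-lun1/san-lun2 from the multipath sketch above):

    pvcreate /dev/mapper/san-lun1
    pvcreate /dev/mapper/san-lun2
    vgcreate san-lvm /dev/mapper/san-lun1 /dev/mapper/san-lun2
    vgs   # verify the new volume group is there

Then add the volume group under Datacenter -> Storage -> Add -> LVM in the GUI.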
 
Another solution is to partition, format, and mount the LUNs, and use them like local storage for qcow2 or raw files. In this case, flag the storage as shared; I think the Proxmox cluster will manage concurrent access to the LUNs.
 
Another solution is to partition, format, and mount the LUNs, and use them like local storage for qcow2 or raw files. In this case, flag the storage as shared; I think the Proxmox cluster will manage concurrent access to the LUNs.
Have you tried and tested that?

The "shared" flag for directory storages i meant for shared storages via Samba/Cifs or NFS. Accessing the same FS on a block level from multiple machines will corrupt it.
 
Have you tried and tested that?
:) Not yet, I'm working on it right now.

I'm trying to figure out how to implement that setup.

I have a twin-blade setup in a BladeCenter, backed by a Storwize V7000, FC-attached via HBA adapters.

I have already set up multipath and it seems to be working like a charm. What are the next steps? What are my options? This is the first time for me with such a setup; normally I would use ZFS or Ceph.
 
:) Not yet, I'm working on it right now.
Please don't spread ideas that you haven't tested out and confirmed working :)

Because mounting a regular file system in two systems at the same time will lead to corruption! PVE does not handle this.

As discussed in the other thread [0], use either thick LVM (which can be set up via the GUI) or a clustered file system like OCFS2 that is designed to be mounted by multiple systems at the same time.


[0] https://forum.proxmox.com/threads/ceph-over-multipath-device-or-something-else.69674
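For the thick LVM route, the resulting /etc/pve/storage.cfg entry might look roughly like this (storage ID and volume group name are the hypothetical ones from earlier):

    lvm: san-lvm
        vgname san-lvm
        content images,rootdir
        shared 1

Sharing is safe here because guests get raw logical volumes rather than a mounted file system, and as far as I know PVE serializes volume creation across the cluster.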
 
Please don't spread ideas that you haven't tested out and confirmed working :)

Because mounting a regular file system in two systems at the same time will lead to corruption! PVE does not handle this.

I've been incomplete in my answer; sorry, I made it a bit confusing. Let me try to explain it better.

I know that mounting a regular file system on two systems will lead to corruption.
What I'm evaluating as an alternative to LVM is to mount one LUN on one node and the other LUN on the other node, using the LUN mounted through FC like a local disk.
In that case I was wrong about the shared flag; it has to be left unchecked on both nodes.

What is really not clear to me (I would appreciate a link to some clear documentation if you have one) is why a thin-provisioned LUN cannot be hosted on a "shared storage". Does "shared" in this case mean concurrent access across Proxmox hosts, or does it mean ANY storage shared by a third party and mounted on a Proxmox host?
Disks behind a SCSI FC HBA adapter show up like local devices. Assuming I do not mount the same LUN on more than one host, is it possible to use thin LVM?
Is there any limitation due to multipath?

Thanks
 
What is really not clear to me (I would appreciate a link to some clear documentation if you have one) is why a thin-provisioned LUN cannot be hosted on a "shared storage". Does "shared" in this case mean concurrent access across Proxmox hosts, or does it mean ANY storage shared by a third party and mounted on a Proxmox host?

If the LUN is not shared in the sense of being accessed at the same time, you can have thin LVM on it. The problem is just that LVM does not support accessing a thin pool from more than one host.

Disks behind a SCSI FC HBA adapter show up like local devices. Assuming I do not mount the same LUN on more than one host, is it possible to use thin LVM?
Is there any limitation due to multipath?

If you only access it from one host, there is no problem. Ensuring this can be tricky; it's best to present the LUN to only one host.
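As a sketch of that single-host case, assuming a multipathed LUN aliased san-lun1 that only this node can see:

    pvcreate /dev/mapper/san-lun1
    vgcreate thinvg /dev/mapper/san-lun1
    lvcreate --type thin-pool -l 100%FREE -n thinpool thinvg

with a matching non-shared /etc/pve/storage.cfg entry restricted to that node (the node name pve1 is a placeholder):

    lvmthin: san-thin
        vgname thinvg
        thinpool thinpool
        content images,rootdir
        nodes pve1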
 
If you only access it from one host, there is no problem. Ensuring this can be tricky; it's best to present the LUN to only one host.
Oh yes, I'll blacklist everything except what I want to be part of multipath.
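For reference, a minimal multipath.conf blacklist sketch (the WWIDs are placeholders):

    blacklist {
        wwid ".*"
    }
    blacklist_exceptions {
        wwid "36005076000000000000000000000001"
        wwid "36005076000000000000000000000002"
    }

Note this only keeps multipathd from claiming the other devices; the underlying sd* nodes still exist, which is exactly the GUI issue below.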

In the Proxmox GUI the sd* devices are visible under Disks. Is there a way to prevent the GUI from seeing the disks exposed by the HBA, and show only the mapper devices?
 
In the Proxmox GUI the sd* devices are visible under Disks. Is there a way to prevent the GUI from seeing the disks exposed by the HBA, and show only the mapper devices?

No, I don't think so. Only a minority of users use multipathed devices, so this is not the main use case of PVE. Another clue is that you have to install multipath-tools after installing PVE, so it's not part of PVE and therefore also not part of the GUI.
 
Another clue is that you have to install multipath-tools after installing PVE, so it's not part of PVE and therefore also not part of the GUI.

There is actually a Feature Request about this.

I would like to prevent all the /dev/sd* devices (except sda) from being shown there

Yes, I get what you want. Maybe open a feature request?
Personally, I never look at the Disks settings. I have my monitoring on the host itself, and multipath is only configurable via the CLI anyway.
 
