FC shared storage volume with LVM

bandzior

Renowned Member
Jun 2, 2015
Hello.

What is the proper way to add a volume from an FC/SCSI storage array to a Proxmox cluster?

I have an issue: after running "rescan-scsi-bus.sh" on all nodes, adding an alias in multipath.conf on all nodes, and creating the PV, the VG, and the proper definition in storage.cfg on one cluster node, I have to restart the other cluster nodes before they can use the new volume - until they are restarted, the new volume shows a question mark on those nodes.
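
Roughly, the commands behind those steps look like this (the WWID, volume name and VG name here are placeholders, not my real values):

Code:
  # on all nodes
  rescan-scsi-bus.sh                       # detect the new LUN
  # alias added inside the multipaths { } section of /etc/multipath.conf:
  #   multipath {
  #       wwid  36001405aaaabbbbccccdddd   # placeholder WWID
  #       alias VOL-XX-25TB-F350
  #   }
  systemctl reload multipathd              # pick up the new alias
  multipath -ll                            # check the friendly name and paths

  # on one node only
  pvcreate /dev/mapper/VOL-XX-25TB-F350
  vgcreate VG-XX /dev/mapper/VOL-XX-25TB-F350
  # plus a matching "lvm" entry with "shared 1" in /etc/pve/storage.cfg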

Regards,
Daniel
 
Hi @bandzior ,

Do the raw LUNs show up in lsblk and/or lsscsi output on each node? Does "multipath -ll" show proper devices? Are there any pertinent logs in "dmesg" and/or "journalctl -n [number]"?
What do "pvs,vgs,lvs" show on secondary nodes? What does "pvesm status" show"? Have you tried restarting "pvestatd" ?

Have you tried "pvscan --cache"?


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
1. Do the raw LUNs show up in lsblk and/or lsscsi output on each node?

I didn't check that; I will check it tomorrow. I assume they do, because I can see the new block devices in "multipath -ll".

2. Does "multipath -ll" show proper devices?

Yes

3. Are there any pertinent logs in "dmesg" and/or "journalctl -n [number]"?

Each volume has four paths, and I can see in dmesg that the OS found the new block devices (sdXX).

4. What do "pvs,vgs,lvs" show on the secondary nodes?

When I run pvscan and vgscan, I can see those PVs and VGs, but with pvs and vgs I can see the new volume only on the first node.

5. Have you tried restarting "pvestatd"?

No

6. Have you tried "pvscan --cache"?

No

IMHO this is some issue with the LVM subsystem.

Regards,
Daniel
 
IMHO this is some issue with the LVM subsystem.
Keep in mind that LVM was not meant for shared storage access on its own. PVE takes advantage of it with careful coordination via a higher application layer. The volume groups are activated as needed, when an LVM slice is created or a VM is moved; they are not active at all times on all nodes.

The first question you have to answer is why the storage pool is showing the question mark. The daemon responsible for health-checking and status-reporting is "pvestatd". It can get tripped up by various environmental issues, and it's up to you to figure out which particular one affects your specific environment.
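
For example, you could start with something like:

Code:
  # check/restart the PVE status daemon and look at its recent log
  systemctl status pvestatd
  systemctl restart pvestatd
  journalctl -u pvestatd -n 50   # look for timeouts or errors about the new storage
  pvesm status                   # the pool should go from "?" to active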


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Next week I will have to add some volumes to the cluster; I will check what is happening and let you know.

Regards,
Daniel
 
I have an issue: after running "rescan-scsi-bus.sh" on all nodes, adding an alias in multipath.conf on all nodes, and creating the PV, the VG, and the proper definition in storage.cfg on one cluster node, I have to restart the other cluster nodes before they can use the new volume - until they are restarted, the new volume shows a question mark on those nodes.
Just from this description, the step that seems to be missing is a pvscan on the other nodes.

What is the proper way to add volume from FC/SCSI storage array to Proxmox cluster?
I use this approach on all FC/iSCSI-based setups, including PVE:

On one node:
  • rescan scsi bus
  • add multipath settings, reload multipath, check new names
  • add udev rules for permissions if applicable, reload udev
  • create physical volumes with pvcreate
  • extend an existing volume group with vgextend, or create a new one with vgcreate
On all other nodes:
  • add multipath settings, reload multipath
  • add udev rules, reload udev
  • rescan scsi bus
  • pvscan
If all physical volumes are present on all nodes, add them to PVE. A reboot is seldom necessary in Linux.
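
As a rough sketch, the commands behind those bullet points (volume, VG and storage names are placeholders, adjust to your naming):

Code:
  # --- on one node ---
  rescan-scsi-bus.sh
  # add the alias/settings for the new WWID to /etc/multipath.conf, then:
  systemctl reload multipathd
  multipath -ll                                  # check the new names
  udevadm control --reload && udevadm trigger    # only if you added udev rules
  pvcreate /dev/mapper/NEW-VOLUME
  vgextend EXISTING-VG /dev/mapper/NEW-VOLUME    # or: vgcreate NEW-VG /dev/mapper/NEW-VOLUME

  # --- on all other nodes ---
  # add the same multipath settings and udev rules, then:
  systemctl reload multipathd
  udevadm control --reload && udevadm trigger
  rescan-scsi-bus.sh
  pvscan

  # when the PVs are visible everywhere, add the VG to PVE, e.g.:
  pvesm add lvm NEW-STORAGE --vgname NEW-VG --shared 1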
 
Just from this description, the step that seems to be missing is a pvscan on the other nodes.
In my first message in this topic, I forgot to mention the pvscan and vgscan commands on all nodes.
pvscan and vgscan on all nodes show the new volumes, but pvs and vgs show the new volumes only on the first node.
I think that in this case I should also reload udev, but my procedure for the other nodes has been:
  • rescan scsi bus
  • add multipath settings, reload multipath
  • add udev rules, reload udev - this is the step I have been missing
  • pvscan
Regards, Daniel
 
udev is only necessary if you need special permissions, links, or additional names. Often it is not necessary on PVE.

In the end, the volume group should be found; it may also be necessary to activate the volume group. You will see that in the output of pvs, vgs, and lvs.
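
For example, something like this on a node where the new volume group does not show up (the VG name is a placeholder):

Code:
  pvscan --cache          # refresh the LVM device cache
  vgscan                  # look for volume groups on the new PVs
  vgchange -ay NEW-VG     # activate the volume group
  pvs; vgs; lvs           # the new PV/VG/LVs should now be listed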
 
I am in the middle of adding 6 new volumes to the cluster.

On the first node I see:

Code:
  /dev/mapper/VOL-10-25TB-F350                            lvm2 ---    25.00t   25.00t
  /dev/mapper/VOL-11-25TB-F350                            lvm2 ---    25.00t   25.00t
  /dev/mapper/VOL-12-25TB-F350                            lvm2 ---    25.00t   25.00t
  /dev/mapper/VOL-13-25TB-F350                            lvm2 ---    25.00t   25.00t
  /dev/mapper/VOL-14-25TB-F350                            lvm2 ---    25.00t   25.00t
  /dev/mapper/VOL-15-25TB-F350                            lvm2 ---    25.00t   25.00t

On the other nodes:

pvscan:

Code:
  PV /dev/mapper/VOL-10-25TB-F350                                   lvm2 [25.00 TiB]
  PV /dev/mapper/VOL-11-25TB-F350                                   lvm2 [25.00 TiB]
  PV /dev/mapper/VOL-12-25TB-F350                                   lvm2 [25.00 TiB]
  PV /dev/mapper/VOL-13-25TB-F350                                   lvm2 [25.00 TiB]
  PV /dev/mapper/VOL-14-25TB-F350                                   lvm2 [25.00 TiB]
  PV /dev/mapper/VOL-15-25TB-F350                                   lvm2 [25.00 TiB]

but pvs does not show the new PVs.

In multipath I can see the new SCSI devices, and lsblk shows the new devices on all nodes.
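
Next I will try the cache refresh suggested earlier on those nodes, something like:

Code:
  pvscan --cache          # refresh the LVM device cache
  pvs                     # check whether the VOL-*-25TB-F350 PVs show up now
  vgchange -ay            # activate their volume groups, if needed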
 
