One LVM for multiple nodes

Hello all,

We have a cluster with 3 nodes.
On one node, hard disks have been added and configured as an LVM volume group.
The idea is to make this LVM storage available on the other nodes as well.
Therefore, the other nodes were added to the LVM storage in the cluster administration, and the "Enable" and "Shared" checkboxes were set.
The LVM storage is now also displayed under the other nodes, but with a question mark.
In the storage summary there, it says "Yes" for Enabled and "No" for Active.
Which settings are still missing to get access?
Can someone give me a hint?
 
Thank you for the link.
I have read through the page again. Unfortunately, it doesn't really help me.
In storage.cfg, the LVM storage is set to "shared 1".
With "pvesm status" on the node where the disks are, the LVM is shown as "active".
On the other nodes it says "inactive".

How can I set this to "active"?
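For reference, such an entry in /etc/pve/storage.cfg looks roughly like this (the storage ID and volume group name below are placeholders, not the actual ones from this cluster):

lvm: my-lvm
        vgname my_vg
        content images,rootdir
        shared 1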
 
If you set "shared 1" or enable the "Shared" checkbox, PVE won't turn that storage into a shared storage. With that option you only tell Proxmox to treat a storage that is already shareable as shared.
As far as I know, LVM and ZFS don't support being used as shared storage on their own. So if you want a shared storage, you should try Ceph or use an NFS/SMB share.
 
Based on the documentation, LVM can be used for this purpose. https://pve.proxmox.com/wiki/Storage:_LVM

Storage Features

LVM is a typical block storage, but this backend does not support snapshots and clones. Unfortunately, normal LVM snapshots are quite inefficient, because they interfere with all writes on the entire volume group during snapshot time.
One big advantage is that you can use it on top of a shared storage, for example, an iSCSI LUN. The backend itself implements proper cluster-wide locking.
Today I managed to set up the iSCSI + multipath + LVM bundle, make this storage shared, and use it for VM & LXC migration without copying data.
The sequence is simple: iSCSI is connected, multipath is configured, the PV and VG are created on one node, and then the LVM storage is created via PVE. After that, iSCSI + multipath is configured on the rest of the cluster nodes.
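A rough command-line sketch of that sequence (target IQN and portal address are placeholders, the device and VG names are taken loosely from the output below, and multipath.conf tuning is omitted):

# on the first node: discover and log in to the iSCSI target
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2021-09.example:lun1 -p 192.0.2.10 --login

# set up multipathing and check the resulting device
apt install multipath-tools
multipath -ll

# create the PV and VG on the multipath device (on one node only)
pvcreate /dev/mapper/mpath0
vgcreate iscsi /dev/mapper/mpath0

# register the VG as a shared LVM storage in PVE
pvesm add lvm iscsi --vgname iscsi --content images,rootdir --shared 1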

root@pve09:/etc# pvs
  PV                 VG           Fmt  Attr PSize    PFree
  /dev/mapper/mpath0 iscsi        lvm2 a--   <20.00t 19.96t
  /dev/sda1          pve09_silver lvm2 a--     9.09t  9.09t
  /dev/sdb3          pve          lvm2 a--  <891.70g 15.99g
root@pve09:/etc# vgs
  VG           #PV #LV #SN Attr   VSize    VFree
  iscsi          1   2   0 wz--n-  <20.00t 19.96t
  pve            1   3   0 wz--n- <891.70g 15.99g
  pve09_silver   1   0   0 wz--n-    9.09t  9.09t
root@pve09:/etc# lvs
  LV             VG    Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-100-disk-0  iscsi -wi-a-----    8.00g
  vm-1000-disk-0 iscsi -wi-a-----   32.00g
  data           pve   twi-a-tz-- <756.27g             0.00   0.25
  root           pve   -wi-ao----   96.00g
  swap           pve   -wi-ao----    8.00g
root@pve09:/etc# multipath -ll
mpath0 (36340a981002c2cf0de2bab3a0000000c) dm-5 HUAWEI,XSG1
size=20T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 16:0:0:1 sdd 8:48 active ready running
| `- 18:0:0:1 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 15:0:0:1 sdc 8:32 active ghost running
  `- 17:0:0:1 sde 8:64 active ghost running
root@pve09:/etc#
 
@Mike Tkatchouk I think the important part to highlight is that you are using _thick_ LVM, so as the VG gets sliced up, a thick LV is created for each virtual disk. That LV is used by one node only (the one where the VM is active), and this is enforced by PVE.
This is a fine and fully supported method in PVE, but it has a few serious drawbacks: no snapshots or linked clones, and sub-optimal space usage.
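A quick way to see that in practice (a hypothetical check, not taken from this thread): run lvs on each node, and only the node currently running the VM will report the corresponding LV as active.

# show which LVs of the shared VG are activated on this node
lvs -o lv_name,vg_name,lv_active iscsi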


 
Hi Mike. Just a question: you set up multipath and LVM on one node and added the storage as shared. So, did you need to configure that on the other nodes after joining the cluster, or do they get the configuration (and "see" the shared storage) automatically after joining the cluster? Do you have to manually configure multipath on every node?
Thanks!
 
@lmonasterio You have to manually configure iSCSI/multipath on all nodes. Make sure it comes up properly after a reboot.
LVM is set up once from any node that can see the storage. The PVE storage configuration that references the LVM volume group is distributed by the cluster subsystem across all nodes.
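As a minimal sketch of the "comes up after a reboot" part (IQN and portal are placeholders):

# on every node: make the iSCSI session log in again automatically at boot
iscsiadm -m node -T iqn.2021-09.example:lun1 -p 192.0.2.10 \
    --op update -n node.startup -v automatic

# make sure the iSCSI and multipath daemons are enabled
systemctl enable --now iscsid multipathd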


 
Thanks @bbgeek17 !!! What's your experience? Does the shared LVM work fine? I am building a lab to prepare the production environment. The only difference is that we use SAS storage (HP P2000), but for multipath it should be the same, I guess. One con: I'll miss snapshots...
 
What's your experience?
It has worked for years, really.

The only difference is that we use SAS storage (HP P2000), but for multipath it should be the same, I guess.
Yes, iSCSI, SAS and FC are all the same (to the underlying SCSI layer) and it works as it should.
Best performance comes with "real" parallel multipath over multiple links, etc.

One con: I'll miss snapshots...
Yes, unfortunately that is a very big one.

We have a local SSD on one node to test stuff locally before moving it to the SAN. Not ideal, but it works.
Alternatively: run a storage VM in PVE itself, e.g. ZFS inside the VM, and export it via iSCSI to the nodes as ZFS-over-iSCSI in PVE, so that you have HA storage. Also not ideal, but it works.
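For reference, a ZFS-over-iSCSI entry in /etc/pve/storage.cfg looks roughly like this (portal, target, pool name and provider below are placeholder values):

zfs: storage-vm
        portal 192.0.2.20
        target iqn.2001-03.example:storage-vm
        pool tank
        iscsiprovider LIO
        content images
        sparse 1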
 
Hi again @LnxBil and @bbgeek17 !!!
I'm setting up a shared SAS storage with multipath. I'm using LVM, but a question arises: I can create the PV directly on "/dev/mapper/multipathX", or I can make a partition using fdisk and then create the PV on "/dev/mapper/multipath-partitionX". So, is there any difference? Which option is better? I did both and they are working as shared storage, but perhaps I should choose one over the other.
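For clarity, the two options being compared look roughly like this (device names are placeholders; the exact name of the partition mapping depends on the multipath setup):

# Option 1: PV directly on the multipath device
pvcreate /dev/mapper/mpatha

# Option 2: partition first (fdisk/sgdisk), then the PV on the partition mapping
kpartx -a /dev/mapper/mpatha
pvcreate /dev/mapper/mpatha-part1   # may also show up as mpatha1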
I hope I am explaining the situation clearly.
Thanks for your time,
Leandro
 
