[SOLVED] Shared iscsi lvm not active on hosts in cluster

Oct 26, 2021
Hi!

I'm trying to add shared iSCSI storage to our small cluster of three hosts, but I'm stuck at what feels like the last part.
I have created a LUN on our Nimble CS300 and successfully mounted the target on all three hosts as /dev/sdc.

In the cluster GUI the iSCSI target shows up happily on all three hosts: Enabled and Active.
Then I created an LVM storage on top of it (nimble-lvm), checked Enabled and Shared, and left Nodes at "All (No restrictions)".

It got created, and the first host sees the LVM and everything is fine. However, the other two hosts show a question mark and report Enabled: yes, Active: no.
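For reference, this is roughly what that setup looks like as pvesm commands (a sketch only; everything except nimble-lvm and nimble-vg is a placeholder, and it assumes the VG itself already exists on the LUN):

Code:
# Sketch: register the iSCSI LUN and a shared LVM storage on top of it
pvesm add iscsi nimble-iscsi --portal <nimble-portal-ip> --target <iqn-of-the-lun> --content none
pvesm add lvm nimble-lvm --vgname nimble-vg --base nimble-iscsi:0.0.0.scsi-<lun-id> --shared 1 --content images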

vgscan on all three hosts sees the volume group:

vgscan
Found volume group "nimble-vg" using metadata type lvm2
Found volume group "pve" using metadata type lvm2

But only the first host shows nimble-vg in vgdisplay:

--- Volume group ---
VG Name nimble-vg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <1.50 TiB
PE Size 4.00 MiB
Total PE 393215
Alloc PE / Size 0 / 0
Free PE / Size 393215 / <1.50 TiB
VG UUID H4wVKN-BiMP-6sTV-hq0N-YXE5-qR4W-g75fup

--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 23
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <223.07 GiB
PE Size 4.00 MiB
Total PE 57105
Alloc PE / Size 53010 / 207.07 GiB
Free PE / Size 4095 / <16.00 GiB
VG UUID TyxcjS-puVQ-ZnxI-swQZ-QSkL-4NGT-cR8yNA

Do I need to do a rescan or something, or mark it as active on the other hosts somehow?
I appreciate all the help I can get.


proxmox-ve: 7.0-2 (running kernel: 5.11.22-5-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-helper: 7.1-2
pve-kernel-5.11: 7.0-8
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
ceph-fuse: 15.2.14-pve1
corosync: 3.1.5-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve1
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-10
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-12
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.11-1
proxmox-backup-file-restore: 2.0.11-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.1-1
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-4
pve-firmware: 3.3-2
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.0.0-4
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-14
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1
 
What you are seeing is correct. iSCSI/LVM is raw storage: it provides no arbitration as to which host may write to which block. This is very different from a cluster-aware filesystem, where you can read and write the same filesystem from multiple hosts and the filesystem arbitrates to make sure nothing gets corrupted or races.

Think of it this way: "iSCSI shared" is a different kind of shared. It means that multiple hosts can access the LUN, but not that they can all write to it at the same time, as opposed to VMFS or Windows Cluster Shared Volumes.

"iSCSI shared", especially with LVM on top of it, means that a volume can easily be accessed and activated on one host at a time, as needed. PVE will activate the volume if the currently active host fails, or when you move the VM that owns a specific LVM slice. PVE, as the orchestration tool, makes sure the volume is active where it is needed.


Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Been busy, so I haven't had time to continue testing this until now. But live migration doesn't seem to work on the iSCSI LVM:

can't activate LV '/dev/nimble-vg/vm-109-disk-0': Cannot process volume group nimble-vg
ERROR: online migrate failure - remote command failed with exit code 255

Offline migration works OK, but the VM can't be started again:

TASK ERROR: can't activate LV '/dev/nimble-vg/vm-109-disk-0': Cannot process volume group nimble-vg

Moving it back to the first host makes it start again.
 
You can execute a command on the other hosts without rebooting:
pvcreate /dev/mapper/pv-lvm-device
The output will show "Can't initialize physical volume", but the storage is activated on that host immediately.
 
Wouldnt "pvscan" with correct options be more appropriate? I believe, PVE uses that approach.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I'm having the same issue when adding a new shared LVM storage on top of a new LUN on an iSCSI target.

On the node I created the storage from (logged in to its web UI) I can use the storage fine, but not on any other node of the cluster. The LVM storage is active only on that one node:

Code:
❯ pvesm status | grep moon
moon lvm active 2147479552 0 2147479552 0.00%

Other nodes:

Code:
# pvesm status | grep moon
moon  lvm inactive 0 0 0 0.00%

What's interesting is the difference between vgs and vgscan. On the working node:

Code:
❯ vgs | grep moon
  moon 1 0 0 wz--n- <2.00t <2.00t

❯ vgscan | grep moon
  Found volume group "moon" using metadata type lvm2

Other nodes:

Code:
# vgs | grep moon
<no output>

# vgscan | grep moon
  Found volume group "moon" using metadata type lvm2


I had this problem before and rebooting the nodes with the inactive storage did activate them, but I'm looking for a solution that doesn't require a reboot.

Code:
# vgimport moon
  Volume group "moon" is not exported

This issue seems related: https://forum.proxmox.com/threads/vgs-problem-proxmox-7-worked-fine-on-proxmox-6.111187/
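For reference, this is what I'd try on an inactive node before resorting to a reboot (a sketch, assuming the LUN comes in via open-iscsi as with PVE's iscsi storage type; "moon" is the VG from above):

Code:
iscsiadm -m session --rescan     # ask the initiator to rescan the session for new LUNs
lsblk                            # check that the backing block device is actually there
pvscan --cache                   # refresh LVM's device metadata
vgs moon                         # the VG should now appear in vgs
pvesm status | grep moon         # and the storage should flip to "active"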

Code:
# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.8-2-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.11: 7.0-10
proxmox-kernel-6.8: 6.8.8-2
proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.17-1-pve: 5.11.17-1
ceph: 18.2.2-pve1
ceph-fuse: 18.2.2-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.4-1
proxmox-backup-file-restore: 3.2.4-1
proxmox-firewall: 0.4.2
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.12-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.0-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1

Code:
# cat /etc/pve/storage.cfg
[...]
lvm: moon
        vgname moon
        base some-ting-iscsi:0.0.5.scsi-<someid>
        content images
        saferemove 0
        shared 1
 
