LVM over iSCSI, missing in GUI disk list

zolitacs

Hello!
My cluster consists of 7 nodes. I would like my iSCSI storage to be available on all of them. Here's how I did it:

This is a two-step procedure and can be configured entirely via the web interface (a sketch of the resulting storage.cfg follows the list below).

  1. First, add the iSCSI target. (On some iSCSI targets you need to add the IQN of the Proxmox VE server to allow access.)
    • Click 'Add iSCSI Target' in the Storage list.
    • As the storage name, use whatever you want, but take care: this name cannot be changed later.
    • Enter the 'Portal' IP address or server name and scan for unused targets.
    • Disable 'Use LUNs directly'.
    • Click 'Save'.
  2. Second, add an LVM group on this target.
    • Click 'Add LVM Group' in the Storage list.
    • As the storage name, use whatever you want, but take care: this name cannot be changed later.
    • For 'Base Storage', use the drop-down menu to select the previously defined iSCSI target.
    • For 'Base Volume', select a LUN.
    • For 'Volume Group Name', give a unique name (this name cannot be changed later).
    • Enable shared use (recommended).
    • Click 'Save'.
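For reference, these two GUI steps end up writing something like the following to /etc/pve/storage.cfg. The storage names are only examples, the 'base' volume ID is a placeholder for the LUN the GUI selects, and the portal/target are the values from my setup:

iscsi: truenas-iscsi
        portal 192.168.6.2
        target iqn.2005-10.org.freenas.ctl:sharing-2-iscsi
        content none

lvm: lvm-over-iscsi
        vgname PROXMOX-LUN-0
        base truenas-iscsi:0.0.0.scsi-<lun-id>
        shared 1
        content images,rootdir

(The LVM storage name and the volume group name are independent; the VG name is what later shows up in pvscan/vgscan.)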
The new LVM storage appears on every node, but it is not active (except on the node I am logged in to), and it does not show up in the Disks/LVM list in the Proxmox GUI either.


I have to restart the node, and then it appears and everything works.


Is there a solution to this without rebooting?
 
I tried the following commands without success:

systemctl restart pvedaemon
systemctl restart pveproxy
systemctl restart pvescheduler
systemctl restart pvestatd

systemctl reload pvedaemon
systemctl reload pveproxy
systemctl reload pvescheduler
systemctl reload pvestatd
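For completeness: the lower-level steps that a reboot effectively re-runs are rescanning iSCSI and letting LVM re-detect the volume group. A rough, untested sketch of doing that by hand, with the portal/target from this setup filled in:

# rescan existing iSCSI sessions for new LUNs
iscsiadm -m session --rescan

# or, if there is no session to the target on this node yet, log in to it
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:sharing-2-iscsi -p 192.168.6.2:3260 --login

# let LVM re-read the devices and pick up the VG; Proxmox activates the
# individual LVs on demand, so no vgchange should be needed
pvscan --cache
vgscan
vgs PROXMOX-LUN-0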
 
Are you sure the LUN is present across all nodes? Does "lsscsi" show it on each one?
Have you used LVM-thin to provision this LUN by chance? Only thick LVM is supported for shared iSCSI storage, and the slices of that VG will be active only on one node at a time, i.e. where the VM is running.
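A quick way to check is the first character of lv_attr: thin volumes show 'V' and thin pools 't', while plain (thick) LVs show '-'. For example (substitute your VG name for <your-vg>):

lvs -o vg_name,lv_name,lv_attr <your-vg>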


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
node1: [screenshot]

node2: [screenshot]

node3: [screenshot]

Until I restarted node2, it didn't work there either, but since then everything is fine; live migration also works. It would probably work on node3 too if I restarted it... however, I wanted to solve this without restarting.

root@node2:~# lsscsi
[0:0:0:0] cd/dvd MATSHITA DVD-RAM UJ870BJ SA14 /dev/sr0
[4:0:0:0] disk FUJITSU MBE2073RC D906 -
[4:0:1:0] disk IBM-ESXS ST973452SS B62C -
[4:1:5:0] disk LSILOGIC Logical Volume 3000 /dev/sda
[5:0:0:0] disk TrueNAS iSCSI Disk 0123 /dev/sdb

root@node3:~# lsscsi
[0:0:0:0] cd/dvd TSSTcorp CDDVDW TS-L633B IB03 /dev/sr0
[4:2:0:0] disk IBM ServeRAID M5015 2.0. /dev/sda
[5:0:0:0] disk TrueNAS iSCSI Disk 0123 /dev/sdb
 
root@node3:~# pvscan
PV /dev/sdb VG PROXMOX-LUN-0 lvm2 [<10.00 GiB / <4.00 GiB free]
PV /dev/sda3 VG pve lvm2 [66.55 GiB / 8.25 GiB free]
Total: 2 [<76.55 GiB] / in use: 2 [<76.55 GiB] / in no VG: 0 [0 ]

root@node3:~# vgscan
Found volume group "PROXMOX-LUN-0" using metadata type lvm2
Found volume group "pve" using metadata type lvm2

root@node3:~# lvscan
inactive '/dev/PROXMOX-LUN-0/vm-135-disk-0' [6.00 GiB] inherit
ACTIVE '/dev/pve/swap' [8.00 GiB] inherit
ACTIVE '/dev/pve/root' [26.64 GiB] inherit
ACTIVE '/dev/pve/data' [21.66 GiB] inherit
ACTIVE '/dev/pve/vm-130-disk-0' [6.00 GiB] inherit

root@node3:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 67.1G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
└─sda3 8:3 0 66.6G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 26.6G 0 lvm /
├─pve-data_tmeta 253:2 0 1G 0 lvm
│ └─pve-data-tpool 253:4 0 21.7G 0 lvm
│ ├─pve-data 253:5 0 21.7G 1 lvm
│ └─pve-vm--130--disk--0 253:6 0 6G 0 lvm
└─pve-data_tdata 253:3 0 21.7G 0 lvm
└─pve-data-tpool 253:4 0 21.7G 0 lvm
├─pve-data 253:5 0 21.7G 1 lvm
└─pve-vm--130--disk--0 253:6 0 6G 0 lvm
sdb 8:16 0 10G 0 disk
sr0 11:0 1 1024M 0 rom

root@node3:~# pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 pve lvm2 a-- 66.55g 8.25g

root@node3:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 4 0 wz--n- 66.55g 8.25g

root@node3:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 21.66g 27.70 1.73
root pve -wi-ao---- 26.64g
swap pve -wi-ao---- 8.00g
vm-130-disk-0 pve Vwi-a-tz-- 6.00g data 100.00
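So on node3 pvscan/vgscan do see PROXMOX-LUN-0, but pvs/vgs/lvs do not. Two things that might be worth checking there (just diagnostic ideas): whether an LVM device filter is hiding /dev/sdb, and whether forcing a rescan of that device changes what pvs reports:

# show any device filters LVM is applying (no output usually means none is set explicitly)
lvmconfig devices/global_filter
lvmconfig devices/filter

# force LVM to re-read the iSCSI disk and refresh its cached view
pvscan --cache /dev/sdb
pvs
vgs PROXMOX-LUN-0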
 
I found this on node2 in journalctl (this is when I restarted node2):

Dec 14 14:13:47 node2 kernel: scsi host5: iSCSI Initiator over TCP/IP
Dec 14 14:13:47 node2 kernel: scsi 5:0:0:0: Direct-Access TrueNAS iSCSI Disk 0123 PQ: 0 ANSI: 7
Dec 14 14:13:47 node2 kernel: sd 5:0:0:0: Attached scsi generic sg4 type 0
Dec 14 14:13:47 node2 kernel: sd 5:0:0:0: Power-on or device reset occurred
Dec 14 14:13:47 node2 kernel: sd 5:0:0:0: [sdb] 2621444 4096-byte logical blocks: (10.7 GB/10.0 GiB)
Dec 14 14:13:47 node2 kernel: sd 5:0:0:0: [sdb] 16384-byte physical blocks
Dec 14 14:13:47 node2 kernel: sd 5:0:0:0: [sdb] Write Protect is off
Dec 14 14:13:47 node2 kernel: sd 5:0:0:0: [sdb] Mode Sense: 7f 00 10 08
Dec 14 14:13:47 node2 kernel: sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
Dec 14 14:13:47 node2 kernel: sd 5:0:0:0: [sdb] Optimal transfer size 1048576 bytes
Dec 14 14:13:47 node2 kernel: sd 5:0:0:0: [sdb] Attached SCSI disk
Dec 14 14:13:47 node2 systemd[1]: Starting LVM event activation on device 8:16...
Dec 14 14:13:47 node2 lvm[1313]: pvscan[1313] PV /dev/sdb online, VG PROXMOX-LUN-0 is complete.
Dec 14 14:13:47 node2 lvm[1313]: pvscan[1313] VG PROXMOX-LUN-0 run autoactivation.
Dec 14 14:13:47 node2 lvm[1313]: 0 logical volume(s) in volume group "PROXMOX-LUN-0" now active
Dec 14 14:13:47 node2 systemd[1]: Finished LVM event activation on device 8:16.
Dec 14 14:13:48 node2 iscsid[894]: Connection1:0 to [target: iqn.2005-10.org.freenas.ctl:sharing-2-iscsi, portal: 192.168.6.2,3260] through [iface: default] is operational now


On node3 I found this error:

Dec 14 23:02:49 node3 pvestatd[1264]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
 
When I try to create a VM on node3 on this LVM storage, I get this error message in the log:


create full clone of drive scsi0 (STREAMER-TESZT:133/vm-133-disk-0.qcow2)
Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
TASK ERROR: clone failed: no such volume group 'PROXMOX-LUN-0'
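Exit code 5 is LVM's generic "command failed" status, so the message the command prints matters more than the code itself. A possible next step (just a suggestion) is to run the failing step by hand on node3 and look at its output:

/sbin/vgscan --ignorelockingfailure --mknodes
echo $?
vgs PROXMOX-LUN-0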
 
