iSCSI disk not active on all clustered nodes

I have added an iSCSI disk successfully to Node 1 as thick LVM, and it shows as Enabled & Active. I can see it in the web GUI on the other 2 nodes, where it shows as Enabled but NOT Active.
  • Checked the storage.cfg file and it's the same on all nodes.
  • Running pvscan & vgscan on Node 1 displays the disk details OK, but the disk is missing on the other 2 nodes.
  • Running lsscsi shows the disk on Node 1 but not on the others.

# lsscsi (Node 1)
[2:0:0:0] storage HP P420i 8.32 -
[2:1:0:0] disk HP LOGICAL VOLUME 8.32 /dev/sda
[3:0:0:0] disk QNAP iSCSI Storage 4.0 /dev/sdb
[4:0:0:0] disk QNAP iSCSI Storage 4.0 /dev/sdc


# lsscsi (Other nodes)
[2:0:0:0] storage HP P420i 8.32 -
[2:1:0:0] disk HP LOGICAL VOLUME 8.32 /dev/sda
[3:0:0:0] disk QNAP iSCSI Storage 4.0 /dev/sdb


How do I make the Disk Active on the other clustered nodes so that it can be shared across all three?
Thanks.
 
It should become active when you have something on those nodes that uses the disk. Try to migrate a VM to the "bad" node, or create a new VM backed by an LVM slice on the iSCSI.
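For example (purely a sketch; VMID 999 and the storage name "iscsi-lvm" are placeholders for a free VMID and your own LVM storage name), creating a small test VM on the "bad" node with its disk on the shared LVM should be enough to activate the storage there:

# qm create 999 --name lvm-test --memory 512 --scsi0 iscsi-lvm:4
(allocates a 4 GB disk from the shared LVM storage; clean up afterwards with "qm destroy 999")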


 
Thanks for your tip. I tried migrating a VM from a "bad" node to the "good" node, but the iSCSI disk doesn't show up in the Target Storage selection box.
At the moment there is no LVM configured on that iSCSI LUN. Should I add LVM to that disk?
 
How did you set up the iSCSI and the storage in PVE?

Normally I would need to
  • log in to the portal on each node
  • check that the disks are present on each node
  • create a physical volume on one node
  • create a volume group on one node
  • add the storage as LVM on one node in the PVE GUI and check "Shared"
Did you do something similar?
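On the command line that would look roughly like this on each node (the portal IP and target IQN below are placeholders for your QNAP's values):

# iscsiadm -m discovery -t sendtargets -p 192.0.2.10
(discover the targets offered by the portal)
# iscsiadm -m node -T iqn.2004-04.com.qnap:example-target -p 192.0.2.10 --login
(log in to the target; the disk should then show up in lsscsi on that node)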
 
Ahhh... no, I have just attached the raw iSCSI LUN.
So I have to create the PV & VG manually from the command line first?
As mentioned above, pvscan/vgscan only show the disk on the first node; the same goes for lsscsi.
 
OK - so I have now removed the iSCSI disk so I can start afresh.
Is there a step-by-step doc detailing how to add an iSCSI disk and create the LVM on top of it so that it is accessible from all 3 nodes in the cluster? There is only a single path to the iSCSI LUN.
 
Revised steps from above:
  • log in to the portal on each node (iscsiadm)
  • check that the disks are present on each node (lsscsi)
  • create a physical volume on one node (pvcreate)
  • create a volume group on one node (vgcreate)
  • add the storage as LVM on one node in the PVE GUI and check "Shared"
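A rough command-line sketch of the last three steps (the device name, VG name and storage ID are placeholders; run pvcreate/vgcreate on one node only, once lsscsi shows the LUN there):

# pvcreate /dev/sdc
(initialise the iSCSI disk as an LVM physical volume)
# vgcreate iscsi-vg /dev/sdc
(create the volume group on top of it)
# pvesm add lvm iscsi-lvm --vgname iscsi-vg --content images,rootdir --shared 1
(roughly the command-line equivalent of Datacenter -> Storage -> Add -> LVM with "Shared" ticked)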
 
I followed the steps and ran into the same problem.
I have three nodes. I logged in to the iSCSI target on all three, ran pvcreate and vgcreate, then created the LVM storage on pve1 and also checked "Shared", but the state of that LVM storage is "unknown" on the other two nodes.
 
As mentioned above, pvscan/vgscan only show the disk on the first node; the same goes for lsscsi.
Are you doing the iSCSI connection manually, i.e. directly via iscsiadm, or are you adding a Proxmox iSCSI storage pool?
If it's the former, then you need to set up that connection on each node, as @LnxBil mentioned. Don't forget to mark the session to auto-start, or it won't be present on reboot.
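For the auto-start part, something along these lines should do it (target IQN and portal below are placeholders):

# iscsiadm -m node -T iqn.2004-04.com.example:target0 -p 192.0.2.10 --op update -n node.startup -v automatic
(marks the node record so the session is re-established automatically at boot)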

If it's the latter, please provide the output of:
- cat /etc/pve/storage.cfg
- pvesm status (from each node)
- iscsiadm -m node (from each node)
- iscsiadm -m session (from each node)
- pvesm list [iscsi_storage_name] (from each node)

So I have to create the PV & VG manually from the command line first?
If you plan to use the iSCSI LUN as shared storage, yes, you have to create the LVM structure manually, but only from one node.

I met the same problem. Have you ever solved it?
You'd need to provide the same output that I listed above. However, it'd be cleaner if you opened a separate thread.

Good luck


 
root@pve1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

lvm: lvm
        vgname pve1
        content rootdir,images
        shared 1

root@pve2:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

lvm: lvm
        vgname pve1
        content rootdir,images
        shared 1

root@pve3:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

lvm: lvm
        vgname pve1
        content rootdir,images
        shared 1

root@pve1:~# pvesm status
Name Type Status Total Used Available %
local dir active 71017632 11527716 55836696 16.23%
local-lvm lvmthin active 148086784 17059597 131027186 11.52%
lvm lvm active 419426304 104857600 314568704 25.00%
root@pve2:~# pvesm status
Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Name Type Status Total Used Available %
local dir active 71017632 7254868 60109544 10.22%
local-lvm lvmthin active 148086784 8648268 139438515 5.84%
lvm lvm inactive 0 0 0 0.00%
root@pve3:~# pvesm status
Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Name Type Status Total Used Available %
local dir active 71017632 7255032 60109380 10.22%
local-lvm lvmthin active 148086784 8692694 139394089 5.87%
lvm lvm inactive 0 0 0 0.00%

root@pve3:~# iscsiadm -m node
10.0.1.107:3260,1 iqn.2017-08.zettastor.iqn:1113021025697932295-0
root@pve2:~# iscsiadm -m node
10.0.1.107:3260,1 iqn.2017-08.zettastor.iqn:1113021025697932295-0
root@pve1:~# iscsiadm -m node
10.0.1.107:3260,1 iqn.2017-08.zettastor.iqn:1113021025697932295-0

root@pve1:~# iscsiadm -m session
tcp: [1] 10.0.1.107:3260,1 iqn.2017-08.zettastor.iqn:1113021025697932295-0 (non-flash)

root@pve2:~# iscsiadm -m session
tcp: [4] 10.0.1.107:3260,1 iqn.2017-08.zettastor.iqn:1113021025697932295-0 (non-flash)
root@pve3:~# iscsiadm -m session
tcp: [5] 10.0.1.107:3260,1 iqn.2017-08.zettastor.iqn:1113021025697932295-0 (non-flash)

root@pve3:~# pvesm list lvm
Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Volid Format Type Size VMID

root@pve2:~# pvesm list lvm
Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Volid Format Type Size VMID

root@pve1:~# pvesm list lvm
Volid Format Type Size VMID
lvm:vm-103-disk-0 raw images 53687091200 103
lvm:vm-103-disk-1 raw images 53687091200 103
 
root@pve1:~# cat /etc/pve/storage.cfg
There was no need to repeat the output of "cat /etc/pve/storage.cfg". In a good cluster that file is located on a special shared file system and is the same on each node.
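To confirm that, the following should report a fuse mount (pmxcfs, the Proxmox cluster file system):

# findmnt /etc/pve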

From your /etc/pve/storage.cfg output I can see that you don't have an iSCSI storage pool defined in PVE. Unless something was preventing you from adding it, it would be easier if you defined one. Then you can make your LVM pool dependent on the iSCSI pool, ensuring the start-order dependencies are handled more reliably.
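For illustration only ("zetta-iscsi" is a made-up pool name, and the base line would reference whichever LUN volume "pvesm list zetta-iscsi" reports), the end result in /etc/pve/storage.cfg could look roughly like this:

iscsi: zetta-iscsi
        portal 10.0.1.107
        target iqn.2017-08.zettastor.iqn:1113021025697932295-0
        content none

lvm: lvm
        vgname pve1
        base zetta-iscsi:<LUN volume id>
        content rootdir,images
        shared 1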

That said, it should work as you have it now. You are solely responsible for making sure that iSCSI sessions are established on each node in your configuration. Please provide the "lsscsi" output as requested by @LnxBil.

Note: if your iSCSI is properly configured and the sessions are logged in, "lsscsi" should list the iSCSI disks. The "vgscan" error sometimes happens when the OS/kernel is confused about the presence of the disks. In some cases I've seen a reboot fix the issue.
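If a reboot is not convenient, it is sometimes enough to ask the initiator and LVM to take another look first (no guarantees, just a less disruptive first attempt):

# iscsiadm -m session --rescan
(rescan all logged-in sessions for new or resized LUNs)
# pvscan --cache
(have LVM re-read device metadata)
# lsscsi
(check whether the iSCSI disk shows up now)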



 
