iSCSI - Disk not Active on all nodes in the cluster

I've got 3 PVE nodes in a cluster.
When I add an iSCSI Disk (All Nodes + Use LUNs directly selected), the disk is only added to 2 nodes.
Checking each PVE server with lsscsi & iscsiadm confirms that only 2 nodes can see the disk.
# lsscsi (Nodes 1 & 3)
[2:0:0:0] storage HP P420i 8.32 -
[2:1:0:0] disk HP LOGICAL VOLUME 8.32 /dev/sda
[3:0:0:0] disk QNAP iSCSI Storage 4.0 /dev/sdb

# lsscsi (Node 2)
[2:0:0:0] storage HP P420i 8.32 -
[2:1:0:0] disk HP LOGICAL VOLUME 8.32 /dev/sda

# iscsiadm -m session (Nodes 1 & 3)
tcp: [3] 172.16.17.143:3260,1 iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.nas401pve.853a72 (non-flash)

# iscsiadm -m session (Node 2)
iscsiadm: No active sessions.

Checking storage.cfg on ALL nodes shows that the disk is defined on all 3 nodes:
iscsi: NAS-4-01-PVE_0
        portal 172.16.17.143
        target iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.nas401pve.853a72
        content images
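As I understand it, /etc/pve/storage.cfg lives on the shared pmxcfs, so it should be identical on every node. A quick way to view the entry and confirm there is no per-node restriction (a "nodes" line would limit which nodes activate the storage) is:
# grep -A3 'iscsi: NAS-4-01-PVE_0' /etc/pve/storage.cfg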

When I check the Proxmox GUI, the iSCSI disk shows as Enabled & Active on Nodes 1 & 3, while Node 2 shows only Enabled = Yes & Active = No.

Checking the iSCSI target shows that only Nodes 1 & 3 are connected.

Is there any reason that Node 2 consistently refuses to connect to the iSCSI disk? I've removed the iSCSI disk from the cluster and re-added it several times, and it's always Node 2 that won't connect.
Any ideas or tips for me to look at?

PVE v8.1.3 is installed on all 3 nodes.
 
AFAIK it will only become active on a node when that node actually uses that iSCSI storage. So try migrating/creating something on that node that uses the iSCSI, and it should become active. I assume the other 2 nodes have already used that iSCSI space but the third node hasn't - hence the discrepancy.
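If that is the case, you may not even need a migration to trigger it; querying or listing the storage also makes PVE try to activate it on that node. Something like this on Node 2 (storage ID taken from your storage.cfg) should force the login attempt:
# pvesm status --storage NAS-4-01-PVE_0
# pvesm list NAS-4-01-PVE_0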
 
Thanks for your responses - I tried to manually add the path:
# iscsiadm --mode node --targetname iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.nas401pve.853a72 -p 172.16.17.143 --login
iscsiadm: No records found

I have not used iSCSI on Nodes 1 & 3 either - they both became active as soon as I added the iSCSI target via the Web GUI.
I tried migrating a VM to Node 2, but when I tried to allocate the target storage to the iSCSI disk it showed 0 bytes for Avail & Capacity, so the migration failed.

I tried running an iSCSI discovery from Node 2 and it reports the iSCSI disk, so it can obviously connect to the target OK - it just can't log in.
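From what I can tell (this is my understanding of open-iscsi, so treat it as a sketch), that would also explain the earlier "No records found": --login needs a node record under /etc/iscsi/nodes/, and only a discovery creates that record, so the working order appears to be:
# iscsiadm -m discovery -t sendtargets -p 172.16.17.143
# iscsiadm -m node -T iqn.2004-04.com.qnap:ts-1232pxu-rp:iscsi.nas401pve.853a72 -p 172.16.17.143 --login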
 
Interesting development: after trying to migrate a VM to Node 2 using the iSCSI disk (and failing), then running the iSCSI discovery, I tried connecting to the iSCSI disk with --login. It returned a message indicating that the session was already active!
Sure enough, the connection is now active - so I'm not sure what started it. Checking the NAS also shows that all 3 nodes are connected to the LUN.
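If I had to guess (unconfirmed), PVE's pvestatd daemon, which periodically checks the status of configured storages, performed the login once the discovery record existed on Node 2. The journal there should show when the session came up:
# journalctl -u iscsid --since "1 hour ago"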
Thanks all for your advice and speedy responses! Much appreciated!
 
I would examine the logs. The iSCSI storage, once configured, is constantly probed for health by PVE. It would have been interesting to compare, during the bad state, the outputs of:
pvesm status
pvesm scan iscsi

As well as look at "journalctl -f". It's hard to say now which of your steps jiggered the connection, as I am sure you did a lot of tinkering between your initial post and the time it "started" working.
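For completeness, the concrete invocations on this setup would be something like the following (portal taken from the first post; "pvesm scan iscsi" expects the portal as an argument):
# pvesm status
# pvesm scan iscsi 172.16.17.143
# journalctl -f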
You could try to reboot all of your nodes at the same time and then examine the state.
Good luck

