[SOLVED] Add a node to the cluster with iSCSI shared storage

zvictorp

Dear forum members,

After adding a newly installed PVE node to an existing cluster, I noticed that it does not automatically connect to the shared iSCSI storage. While the storage appears in the GUI, the associated VGs and LVs are not visible on the new node.

Access to iSCSI LUNs is organized through multipath.

Could you please recommend a safe method for adding a new node to such clusters?

Thank you in advance.
 
Hi @zvictorp, can you please provide more technical details about the state of your system? There are more than a few ways to arrive at the end state you described as "storage does not connect".

Run the following commands and provide the output (as text inside CODE tags) from both a "good" and a "bad" node; one way to collect everything into a single file per node is sketched below the list:

cat /etc/pve/storage.cfg // single node is enough
pvesm status // both nodes
pvesm list [iscsi_storage] // both nodes
iscsiadm -m session // both nodes
lsblk // both nodes
pvs // both
vgs // both
lvs // both
journalctl -n 100 // after executing storage related operation
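
If it helps, a simple way to capture all of this in one pasteable file per node could look like the sketch below (adjust the storage name to whatever your storage.cfg actually uses):

STORAGE="iscsi_storage"   # replace with the iSCSI storage name from storage.cfg
{
  cat /etc/pve/storage.cfg
  pvesm status
  pvesm list "$STORAGE"
  iscsiadm -m session
  lsblk
  pvs
  vgs
  lvs
} > /tmp/storage-debug-"$(hostname)".txt 2>&1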

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 

Hi @bbgeek17
Thank you for the reply


cat /etc/pve/storage.cfg
https://pastebin.com/raw/ccFc2YET

pvesm status
https://pastebin.com/raw/Ps8QhPvs

pvesm status
https://pastebin.com/raw/GGLnEYwm

pvesm list
https://pastebin.com/raw/FJ8hDm7j

iscsiadm -m session
https://pastebin.com/raw/3XK83BXL

lsblk
https://pastebin.com/raw/v0xbXY3q

pvs/vgs/lvs
https://pastebin.com/raw/kNYu8Mr1

The journalctl output will be provided later, during a maintenance window.
 
@zvictorp, I don't know how you managed to get into this state...

Your LVM storage is not based on the iSCSI storage, so there is no helpful linking between the two storage definitions.
Your iSCSI storage entries are marked as disabled. This is something only a human can do.
I fully expect that after the next reboot of pve21 the iSCSI sessions won't come back and your LVM storage will fail too.
This also explains why pve22 never got iSCSI sessions established.

Good luck


 
Hi @bbgeek17

Thank you for your reply.

As you may have noticed, there is a volume group

lvm: shared_iscsi_1
vgname vg_iscsi
content images,rootdir
shared 1

created by multipath.

The iSCSI storage entries have been manually disabled because they reference the same LUN. Multipath creates a Volume Group (VG) named vg_iscsi, which is shared and enabled in the configuration. Your assumption is incorrect because automatic login to the iSCSI target does not depend on whether it is enabled in the pvesm configuration.

Thank you so much for your replies.

I believe that I need to manually discover targets and establish iSCSI connections. Subsequently, I should add the node to the cluster. I will verify this at a suitable time and post the outcome here.
 
Multipath creates a Volume Group (VG) named vg_iscsi, which is shared and enabled in the configuration.
Hi @zvictorp, you are a bit confused about the layers of your storage infrastructure. Multipath is Linux kernel and userland software that recognizes and manages multiple paths to the same storage. Multipath does not know how you manage storage above it, i.e., in your case, with the Logical Volume Manager (LVM).
Multipath does not create LVM pieces or PVE storage pool definitions.
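
You can see the layering directly on the node where the storage currently works; a rough sketch (substitute your own multipath alias and VG name, the ones below are just taken from this setup):

MPATH=/dev/mapper/s2lv1p1     # the multipath device as it appears in your lsblk output
pvs -o pv_name,vg_name        # the PV backing vg_iscsi should be this multipath device, not a raw sdX
lsblk -s "$MPATH"             # walks the stack downward: mpath map -> the individual iSCSI disks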

Your assumption is incorrect because automatic login to the iSCSI target does not depend on whether it is enabled in the pvesm configuration.
You are right. I must have missed in your posts where you mentioned that the iSCSI nodes/sessions were created manually outside of PVE.
Now that I am aware of it, it makes even more sense that the second host is not seeing the storage. Since those sessions were created manually, there is nothing syncing that configuration across the cluster, and you will need to repeat it manually on the new node.
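
A quick way to see this for yourself (just a suggestion, not something you posted): compare the recorded iSCSI node entries on both hosts.

iscsiadm -m node   # run on each host; manually created discovery records live in /etc/iscsi/nodes
                   # and exist only where discovery/login was actually performed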


 
After adding a new node to the cluster, you should perform the following steps. The exact steps may, of course, vary according to the environment:

1. Discover your iscsi targets
root@pve22:~# iscsiadm -m discovery -t sendtargets -p 10.254.2.2
10.254.2.2:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.401
root@pve22:~# iscsiadm -m discovery -t sendtargets -p 10.254.2.6
10.254.2.6:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.412
root@pve22:~# iscsiadm -m discovery -t sendtargets -p 10.254.2.10
10.254.2.10:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.501
root@pve22:~# iscsiadm -m discovery -t sendtargets -p 10.254.2.14
10.254.2.14:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.512

2. Login to all discovered targets
root@pve22:~# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.501, portal: 10.254.2.10,3260] (multiple)
Logging in to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.512, portal: 10.254.2.14,3260] (multiple)
Logging in to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.401, portal: 10.254.2.2,3260] (multiple)
Logging in to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.412, portal: 10.254.2.6,3260] (multiple)
Login to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.501, portal: 10.254.2.10,3260] successful.
Login to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.512, portal: 10.254.2.14,3260] successful.
Login to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.401, portal: 10.254.2.2,3260] successful.
Login to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.412, portal: 10.254.2.6,3260] successful.


3. Verify iscsi sessions
root@pve22:~# iscsiadm -m session
tcp: [1] 10.254.2.10:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.501 (non-flash)
tcp: [2] 10.254.2.14:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.512 (non-flash)
tcp: [3] 10.254.2.2:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.401 (non-flash)
tcp: [4] 10.254.2.6:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.412 (non-flash)

4. At this point you should see new block devices, in my case sd[abcd]. All of them reference the same LUN on the NAS (a quick way to confirm this is sketched after the lsblk output below).
root@pve22:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.7T 0 disk
sdb 8:16 0 1.7T 0 disk
sdc 8:32 0 1.7T 0 disk
sdd 8:48 0 1.7T 0 disk
nvme0n1 259:0 0 894.3G 0 disk
├─nvme0n1p1 259:1 0 1007K 0 part
├─nvme0n1p2 259:2 0 512M 0 part
└─nvme0n1p3 259:3 0 893.8G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 94G 0 lvm /
├─pve-data_tmeta 253:2 0 7.8G 0 lvm
│ └─pve-data 253:4 0 760.2G 0 lvm
└─pve-data_tdata 253:3 0 760.2G 0 lvm
└─pve-data 253:4 0 760.2G 0 lvm
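
An optional sanity check (assuming the device names above): all four disks should report the same WWID, i.e. they are four paths to one LUN.

for dev in /dev/sd{a,b,c,d}; do
    printf '%s: ' "$dev"
    /lib/udev/scsi_id -g -u -d "$dev"
done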


5. Let's configure multipath.
Obtain the WWID of the LUN:
/lib/udev/scsi_id -g -u -d /dev/sda
3600d0231000a2a8373ba0c873a18943d

Add the following lines to the /etc/multipath.conf configuration file:
blacklist {
    wwid .*
}

blacklist_exceptions {
    wwid "3600d0231000a2a8373ba0c873a18943d"
}

multipaths {
    multipath {
        wwid "3600d0231000a2a8373ba0c873a18943d"
        alias s2lv1p1    # /dev/mapper/s2lv1p1
    }
}

defaults {
    polling_interval       2
    path_selector          "round-robin 0"
    path_grouping_policy   multibus
    uid_attribute          ID_SERIAL
    rr_min_io              100
    failback               immediate
    no_path_retry          queue
    user_friendly_names    yes
}

You can copy the configuration directly from any node of the cluster.

Add WWID to multipath
multipath -a 3600d0231000a2a8373ba0c873a18943d

Restart multipath-tools
systemctl restart multipathd
systemctl restart multipath-tools

Apply new settings to multipath
multipath -r
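
Optionally, before moving on, confirm that the map was assembled and all four paths are listed as active (alias as configured above):

multipath -ll s2lv1p1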

6. Verify that all virtual machine logical volumes are visible and that their hierarchy is correct (an optional follow-up check is sketched after the output):
root@pve22:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.7T 0 disk
└─s2lv1p1 253:5 0 1.7T 0 mpath
├─vg_iscsi-vm--100--disk--0 253:6 0 20G 0 lvm
├─vg_iscsi-vm--108--disk--0 253:7 0 1G 0 lvm
├─vg_iscsi-vm--109--disk--0 253:8 0 1G 0 lvm
.....
sdc
....
sdb 8:16 0 1.7T 0 disk
....
sdd 8:48 0 1.7T 0 disk
└─s2lv1p1 253:5 0 1.7T 0 mpath
├─vg_iscsi-vm--100--disk--0 253:6 0 20G 0 lvm
├─vg_iscsi-vm--108--disk--0 253:7 0 1G 0 lvm
├─vg_iscsi-vm--109--disk--0 253:8 0 1G 0 lvm
....
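
Optionally, as a final check on the new node, confirm that LVM and PVE now see the shared storage (names as used above) before migrating anything to it:

pvs              # the multipath device should show up as a PV
vgs vg_iscsi     # the shared VG should be listed
pvesm status     # the shared LVM storage should report as active on this node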
 
Thanks for posting this @zvictorp

You may want to add that the default setting for node.startup in iscsid.conf is manual, which means the sessions will not be re-established after a reboot.
As such, one would either change this default in /etc/iscsi/iscsid.conf or set node.startup = automatic per node/session (an example follows the snippet below):

Code:
# To request that the iscsi service scripts startup a session, use "automatic":
# node.startup = automatic
#
# To manually startup the session, use "manual". The default is manual.
node.startup = manual
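
For example (just a sketch, adjust target and portal to your environment), the recorded nodes can be switched to automatic login with iscsiadm instead of editing the file:

Code:
# all recorded nodes at once:
iscsiadm -m node --op update -n node.startup -v automatic
# or a single target/portal:
iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.uid666243.401 -p 10.254.2.2:3260 --op update -n node.startup -v automatic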


 
