After adding a new node to the cluster, perform the following steps. They may, of course, vary depending on your environment:
1. Discover your iSCSI targets
root@pve22:~# iscsiadm -m discovery -t sendtargets -p 10.254.2.2
10.254.2.2:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.401
root@pve22:~# iscsiadm -m discovery -t sendtargets -p 10.254.2.6
10.254.2.6:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.412
root@pve22:~# iscsiadm -m discovery -t sendtargets -p 10.254.2.10
10.254.2.10:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.501
root@pve22:~# iscsiadm -m discovery -t sendtargets -p 10.254.2.14
10.254.2.14:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.512
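If the portal addresses are known in advance, the four discovery commands can also be run in a short loop (a convenience sketch only; the addresses below are the ones from this setup):
# discover targets on every portal of the NAS
for portal in 10.254.2.2 10.254.2.6 10.254.2.10 10.254.2.14; do
    iscsiadm -m discovery -t sendtargets -p "$portal"
done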
2. Log in to all discovered targets
root@pve22:~# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.501, portal: 10.254.2.10,3260] (multiple)
Logging in to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.512, portal: 10.254.2.14,3260] (multiple)
Logging in to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.401, portal: 10.254.2.2,3260] (multiple)
Logging in to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.412, portal: 10.254.2.6,3260] (multiple)
Login to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.501, portal: 10.254.2.10,3260] successful.
Login to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.512, portal: 10.254.2.14,3260] successful.
Login to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.401, portal: 10.254.2.2,3260] successful.
Login to [iface: default, target: iqn.2002-10.com.infortrend:raid.uid666243.412, portal: 10.254.2.6,3260] successful.
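Optionally, if the node should re-establish these sessions by itself after a reboot, open-iscsi can be told to log in to all known targets at startup (this is not part of the original procedure, so treat it as an optional extra):
# mark all discovered node records for automatic login at boot
iscsiadm -m node --op update -n node.startup -v automatic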
3. Verify iSCSI sessions
root@pve22:~# iscsiadm -m session
tcp: [1] 10.254.2.10:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.501 (non-flash)
tcp: [2] 10.254.2.14:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.512 (non-flash)
tcp: [3] 10.254.2.2:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.401 (non-flash)
tcp: [4] 10.254.2.6:3260,1 iqn.2002-10.com.infortrend:raid.uid666243.412 (non-flash)
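If you need more detail, for example which SCSI devices each session has attached, raise the print level (optional check):
# print level 3 includes the attached disks per session
iscsiadm -m session -P 3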
4. At this point you should see the new block devices, in my case sd[a-d]. All of them refer to the same LUN on the NAS.
root@pve22:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.7T 0 disk
sdb 8:16 0 1.7T 0 disk
sdc 8:32 0 1.7T 0 disk
sdd 8:48 0 1.7T 0 disk
nvme0n1 259:0 0 894.3G 0 disk
├─nvme0n1p1 259:1 0 1007K 0 part
├─nvme0n1p2 259:2 0 512M 0 part
└─nvme0n1p3 259:3 0 893.8G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 94G 0 lvm /
├─pve-data_tmeta 253:2 0 7.8G 0 lvm
│ └─pve-data 253:4 0 760.2G 0 lvm
└─pve-data_tdata 253:3 0 760.2G 0 lvm
└─pve-data 253:4 0 760.2G 0 lvm
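To confirm that sda, sdb, sdc and sdd really are four paths to the same LUN, you can compare their SCSI WWIDs; all four should print the same identifier (a quick sanity check using the same scsi_id tool as in the next step):
# every path to the same LUN reports the same WWID
for dev in /dev/sd{a,b,c,d}; do
    /lib/udev/scsi_id -g -u -d "$dev"
done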
5. Let's configure multipath.
Obtain the WWID of the LUN:
/lib/udev/scsi_id -g -u -d /dev/sda
3600d0231000a2a8373ba0c873a18943d
Add the following lines to the /etc/multipath.conf configuration file:
blacklist {
    wwid .*
}
blacklist_exceptions {
    wwid "3600d0231000a2a8373ba0c873a18943d"
}
multipaths {
    multipath {
        wwid "3600d0231000a2a8373ba0c873a18943d"
        alias s2lv1p1    # /dev/mapper/s2lv1p1
    }
}
defaults {
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    uid_attribute ID_SERIAL
    rr_min_io 100
    failback immediate
    no_path_retry queue
    user_friendly_names yes
}
You can also copy this configuration file directly from any existing node of the cluster.
Add the WWID to multipath:
multipath -a 3600d0231000a2a8373ba0c873a18943d
Restart the multipath services:
systemctl restart multipathd
systemctl restart multipath-tools
Apply the new settings to multipath:
multipath -r
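Check the resulting topology; the map should appear under the alias s2lv1p1 with all four paths active and ready (the exact output depends on your storage):
multipath -ll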
6. Verify that all logical volumes of the virtual machines are visible and that their hierarchy is correct:
root@pve22:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.7T 0 disk
└─s2lv1p1 253:5 0 1.7T 0 mpath
├─vg_iscsi-vm--100--disk--0 253:6 0 20G 0 lvm
├─vg_iscsi-vm--108--disk--0 253:7 0 1G 0 lvm
├─vg_iscsi-vm--109--disk--0 253:8 0 1G 0 lvm
.....
sdc
....
sdb 8:16 0 1.7T 0 disk
....
sdd 8:48 0 1.7T 0 disk
└─s2lv1p1 253:5 0 1.7T 0 mpath
├─vg_iscsi-vm--100--disk--0 253:6 0 20G 0 lvm
├─vg_iscsi-vm--108--disk--0 253:7 0 1G 0 lvm
├─vg_iscsi-vm--109--disk--0 253:8 0 1G 0 lvm
....
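As an additional check you can confirm that LVM sees the physical volume and the virtual machine volumes through the multipath device rather than through the raw sdX paths (the names below are the ones used in this setup):
# the PV should show up as /dev/mapper/s2lv1p1, not /dev/sdX
pvs | grep s2lv1p1
lvs vg_iscsi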