Need advice: iSCSI + 3 node cluster

newbx

New Member
May 8, 2025
Hi,
I need advice on the best way to connect a 3 node Proxmox cluster to a PowerVault ME5024. The NAS has 2x iSCSI controllers, and each Proxmox node has 2x 10GbE NICs.
I used to have a 10GbE switch between the NAS and the cluster, but now I decided to connect the cluster directly to the NAS. So far I have connected each node to the NAS like NIC1:Controller_A, NIC2:Controller_B, so I have used 3 of the 4 ports on each controller. I am planning to use the 4th port on each controller to also connect the Proxmox Backup Server.
Now I am creating volumes and hosts on the NAS and I wonder:
1) Should I create 1 host with 3 initiators for the cluster and a 2nd host with 1 initiator for the backup server,

2) or should I create 3 separate hosts with 1 initiator each, one for each node separately,

3) or did I mess it all up and should do it differently, and how?
 
Hi @newbx , welcome to the forum.

With a 3 node Proxmox cluster, assuming all 3 nodes need access to the shared storage and each node has 2 network ports, you should:
a) Have a network switch, preferably two with MLAG connectivity
b) Connect each client to both switches and use LACP (see the bond sketch after this list)
c) Each client will have a unique IQN
d) Your NAS host group (or whatever it is called for your particular vendor) will contain 3 unique IQNs, one from each host
e) If your PBS needs access to a separate LUN, it can be physically connected the same way as your PVE hosts. It needs to be in a separate "host group" with access to a dedicated LUN

These are best practices. Whether they are suitable for you will depend on your business requirements.

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I have only one switch for now, and I am afraid that it will be a single point of failure; that's why I asked if it's even possible to attach each node directly to controllers A and B of the NAS. I tried this, and the problem was that when I was adding the shared iSCSI storage to one node in the GUI, the two other nodes were auto-populated with the same storage, but of course they couldn't connect to it because they were connected to different ports on the controller, so the NAS has a different IP address for each of them. Will there be any benefit for the cluster if each node is connected to a different port on the controller, or won't it work like that? Last time, when I was using the switch, I created 2 VLANs on it, one for each controller, and it was working, but as I mentioned I was afraid of a single point of failure and tried a different way now.
 
Will there be any benefit for the cluster if each node is connected to a different port on the controller, or won't it work like that?
This is an architecture suitable for FC and SAS connections. It is not meant for IP connectivity.

If you insist on going that route, you will need to bypass PVE iSCSI management and create your iSCSI sessions manually via iscsiadm. You'd mark the higher-layer LVM structure as shared and it will somewhat work.
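
Very roughly, a per-node sketch of that manual approach (the target IQN and portal IP are placeholders; each node would log in to the controller ports it is actually cabled to):

Code:
# discover the target on the portal this node is directly cabled to
iscsiadm -m discovery -t sendtargets -p <portal_ip_for_this_node>

# log in and make the session come back automatically after a reboot
iscsiadm -m node -T <target_iqn> -p <portal_ip_for_this_node> --login
iscsiadm -m node -T <target_iqn> -p <portal_ip_for_this_node> --op update -n node.startup -v automatic

# the LVM storage defined on top of the resulting device is then marked "shared 1" in /etc/pve/storage.cfg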


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you,
I dropped this idea and went with the one-switch solution. I was following this tutorial on YouTube https://www.youtube.com/watch?v=hKRjCwLRzvo and ended up with 2x iSCSI storages, one for each iSCSI controller. Is this a correct configuration? I don't see any errors and I set up multipath on each node, but 2 storages look weird, and when I try to create LVM over iSCSI I have to choose one of the storages even though it's actually the same storage. I also created 3 hosts on my NAS and added them to 1 host group, then attached the LUN to that group. Am I doing it correctly?
 
I have not viewed your tutorial.

It is not clear what you got and why.

I would recommend clearing everything out and starting from scratch. If you get stuck again, provide the CLI output in the text format using CODE tags. The following is a good start:
- cat /etc/pve/storage.cfg
- iscsiadm -m node
- iscsiadm -m session
- lsblk
- lsscsi
- multipath -ll


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Code:
root@px1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

rbd: hdd_pool_1
        content images,rootdir
        krbd 0
        pool hdd_pool_1

iscsi: proxmox_shared
        portal 192.168.10.4
        target iqn.1988-11.com.dell:01.array.bcc0564dbfc4
        content none

iscsi: proxmox_shared_2
        portal 192.168.20.4
        target iqn.1988-11.com.dell:01.array.bcc0564dbfc4
        content none

root@px1:~# iscsiadm -m node
192.168.10.1:3260,1 iqn.1988-11.com.dell:01.array.bcc0564dbfc4
192.168.10.2:3260,3 iqn.1988-11.com.dell:01.array.bcc0564dbfc4
192.168.10.3:3260,5 iqn.1988-11.com.dell:01.array.bcc0564dbfc4
192.168.10.4:3260,7 iqn.1988-11.com.dell:01.array.bcc0564dbfc4
192.168.20.1:3260,2 iqn.1988-11.com.dell:01.array.bcc0564dbfc4
192.168.20.2:3260,4 iqn.1988-11.com.dell:01.array.bcc0564dbfc4
192.168.20.3:3260,6 iqn.1988-11.com.dell:01.array.bcc0564dbfc4
192.168.20.4:3260,8 iqn.1988-11.com.dell:01.array.bcc0564dbfc4

root@px1:~# iscsiadm -m session
tcp: [7] 192.168.20.4:3260,8 iqn.1988-11.com.dell:01.array.bcc0564dbfc4 (non-flash)
tcp: [8] 192.168.10.4:3260,7 iqn.1988-11.com.dell:01.array.bcc0564dbfc4 (non-flash)

root@px1:~# lsblk
NAME                                                                                                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
.....
sdf                                                                                                     8:80   0  27.3T  0 disk 
└─3600144f028f88a0000005037a96d0001                                                                   252:9    0  27.3T  0 mpath
sdg                                                                                                     8:96   0  27.3T  0 disk 
└─3600144f028f88a0000005037a96d0001                                                                   252:9    0  27.3T  0 mpath

root@px1:~# multipath -ll
3600144f028f88a0000005037a96d0001 dm-9 DellEMC,ME5
size=27T features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 16:0:0:0 sdg 8:96 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  `- 15:0:0:0 sdf 8:80 active ready running
 
I am not sure what your end result should be. Did you program every SP with 4 IP addresses? Or are the .1-.3 addresses leftovers from prior testing?
Perhaps what you have now is where you want to be. What does iscsiadm -m discovery -t st -p <portal_ip> return?

It is possible to only have one iSCSI storage pool defined, if the SAN returns all the required portal IPs in the discovery to any of the portals.
If it does not, then you either need to define both paths, or configure iSCSI sessions manually.
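
For example, a quick check along these lines against one of the portals from your storage.cfg (192.168.10.4):

Code:
# if this returns portals from both the 192.168.10.x and 192.168.20.x subnets,
# a single iSCSI storage definition in PVE should be enough; if it only returns
# its own subnet, keep both definitions (or manage the sessions manually)
iscsiadm -m discovery -t st -p 192.168.10.4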

You have the sessions, you have the multipath - next step is LVM.
This article may be helpful: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
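
As a rough sketch of that LVM step: run the pvcreate/vgcreate part on one node only. The device name below is the multipath WWID from your multipath -ll output, and the VG/storage names are just examples; double-check everything against your own system before writing to the disk:

Code:
# one node only: create the PV and VG on the multipath device
pvcreate /dev/mapper/3600144f028f88a0000005037a96d0001
vgcreate me5_vg /dev/mapper/3600144f028f88a0000005037a96d0001

# /etc/pve/storage.cfg entry (the file is cluster-wide), marked shared
lvm: me5_lvm
        vgname me5_vg
        content images,rootdir
        shared 1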


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox