iSCSI on second cluster node

gob

Renowned Member
Aug 4, 2011
Chesterfield, United Kingdom
Hi

I have been running a single Proxmox VE node (originally the v4 Beta, now the v4 release) since the v4 Beta was released. It is connected to a DELL MD3000i iSCSI SAN with multipath enabled and working. There are two LUNs on the SAN, each with an LVM volume on it, and I am using both. This has been working fine for months.
I have now created a cluster from that node and added a second node. The two iSCSI storage devices automatically showed up under my second node, I guess because no node restriction was applied when I set up the storage originally.
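
(For what it's worth, I believe the relevant knob is the "nodes" option in /etc/pve/storage.cfg: if it is omitted, the storage is offered on every cluster node, while something like "nodes ms-200-prox01" would pin it to node 1. A rough sketch with made-up storage and VG names, not my actual config:)
Code:
lvm: san-lvm-1
        vgname vg_san1
        content images
        shared 1
        nodes ms-200-prox01,ms-200-prox02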

If I select the iSCSI storage under my second node it displays 0 size and 0 available. So I assume I need to configure it again for the second node.
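
(To double-check that from the command line rather than the GUI, I think pvesm can report the same numbers per node; a quick sketch, where "san-lvm-1" stands in for whatever the storage is actually called:)
Code:
# run on the second node; "san-lvm-1" is a placeholder for the real storage ID
pvesm status
pvesm list san-lvm-1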

I have taken the second node out of the storage configuration for the iSCSI devices and then tried to add the iSCSI target again for the second node. However, when I use the web GUI and enter the portal IP, the scan never returns any targets.

If I run the command below from command line on the second node I get the targets:
Code:
root@xxx:~# iscsiadm -m discovery -t st -p 10.4.130.60
10.4.130.60:3260,1 iqn.1984-05.com.dell:powervault.6001ec9000df04ae00000000486fb13d
10.4.131.60:3260,1 iqn.1984-05.com.dell:powervault.6001ec9000df04ae00000000486fb13d
10.4.130.61:3260,2 iqn.1984-05.com.dell:powervault.6001ec9000df04ae00000000486fb13d
10.4.131.61:3260,2 iqn.1984-05.com.dell:powervault.6001ec9000df04ae00000000486fb13d
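
(If it is relevant: as far as I understand the open-iscsi tools, discovery alone does not create a session, so the node still has to log in to the target. Using the IQN and portal from the output above, that would be roughly:)
Code:
# log in to the discovered target (IQN and portal taken from the discovery output above)
iscsiadm -m node -T iqn.1984-05.com.dell:powervault.6001ec9000df04ae00000000486fb13d -p 10.4.130.60:3260 --login
# confirm that a session is established
iscsiadm -m session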

From here I am a bit stumped.
Should I be able to span the single iSCSI configuration over both nodes, or do I have to re-add the iSCSI connections for the second node?
Either way, I cannot find the LUNs from the web GUI.

Any advice would be appreciated.

Thanks
Gordon
 
Thanks for your reply, dietmar.
Yes, my mistake, it is the LVM storage on the iSCSI target that I can see on both nodes.
I have attached a screenshot of what I see in the gui.

Also here is the LVM info on the second node which doesn't show any of the shared iSCSI LVM volumes:
Code:
root@ms-200-prox02:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               557.62 GiB
  PE Size               4.00 MiB
  Total PE              142751
  Alloc PE / Size       138656 / 541.62 GiB
  Free  PE / Size       4095 / 16.00 GiB
  VG UUID               YtsAZ1-nfhK-wwAN-UqyQ-xew6-CW1c-6B5jrC

Code:
root@ms-200-prox02:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                Ip1EWt-5xUH-uVUg-zLfF-vVUq-WkPk-QATPEx
  LV Write Access        read/write
  LV Creation host, time proxmox, 2015-10-22 13:50:22 +0100
  LV Status              available
  # open                 2
  LV Size                15.00 GiB
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1


  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                Gef2ZJ-t2Uy-Uj9m-OdYI-zfCs-gLJf-oogQ6o
  LV Write Access        read/write
  LV Creation host, time proxmox, 2015-10-22 13:50:22 +0100
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0


  --- Logical volume ---
  LV Path                /dev/pve/data
  LV Name                data
  VG Name                pve
  LV UUID                eOLsYL-H9ke-qiOO-T5Oe-sCo2-I6zD-eYxKaV
  LV Write Access        read/write
  LV Creation host, time proxmox, 2015-10-22 13:50:22 +0100
  LV Status              available
  # open                 1
  LV Size                430.62 GiB
  Current LE             110240
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2
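
(My assumption is that once the iSCSI sessions are actually logged in on this node, the shared volume groups should become visible to LVM without any further configuration. For reference, the checks I would expect to use:)
Code:
# rescan for physical volumes and volume groups once the iSCSI block devices are present
pvscan
vgscan
# the shared VGs from the SAN should then appear alongside "pve"
vgs
lvs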
 

Attachments

  • Storage.PNG
It seems you have not even added an iSCSI type storage? The idea is that you first define an iSCSI type storage, then use that storage as the base storage when you define the LVM storage.
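
For example, the resulting /etc/pve/storage.cfg entries could look like this (the storage IDs, VG name and the LUN volume name here are only placeholders; use the real values from your setup):
Code:
iscsi: dell-san
        portal 10.4.130.60
        target iqn.1984-05.com.dell:powervault.6001ec9000df04ae00000000486fb13d
        content none

lvm: san-lvm-1
        base dell-san:0.0.0.scsi-36001ec9000df04ae00000dba55b19f20
        vgname vg_san1
        shared 1
        content images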
 
Hmm... I tried to add the iSCSI storage, but it doesn't see any targets through the GUI (screenshot attached: add-iscsi.PNG). It does, however, see them from the CLI:
Code:
root@xxx:~# iscsiadm -m discovery -t st -p 10.4.130.60
10.4.130.60:3260,1 iqn.1984-05.com.dell:powervault.6001ec9000df04ae00000000486fb13d
10.4.131.60:3260,1 iqn.1984-05.com.dell:powervault.6001ec9000df04ae00000000486fb13d
10.4.130.61:3260,2 iqn.1984-05.com.dell:powervault.6001ec9000df04ae00000000486fb13d
10.4.131.61:3260,2 iqn.1984-05.com.dell:powervault.6001ec9000df04ae00000000486fb13d
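
(In case it narrows things down: as far as I can tell the GUI scan can be reproduced from the shell with pvesm, and a plain TCP check would at least show whether the portal is reachable from this node. A sketch, with the exact pvesm syntax to be checked against "man pvesm":)
Code:
# reproduce the GUI target scan from the CLI (check the exact syntax in "man pvesm")
pvesm iscsiscan -portal 10.4.130.60
# basic reachability check of the iSCSI portal from this node
nc -zv 10.4.130.60 3260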
 
Could it be something to do with multipath? It is working on node 1 but I do get some errors:

Code:
root@ms-200-prox01:~# multipath -ll
Oct 26 13:15:14 | multipath.conf +21, invalid keyword: mulitpath
Oct 26 13:15:14 | multipath.conf +22, invalid keyword: device
Oct 26 13:15:14 | multipath.conf +23, invalid keyword: vendor
Oct 26 13:15:14 | multipath.conf +24, invalid keyword: product
Oct 26 13:15:14 | multipath.conf +26, invalid keyword: }
mpathd (36001ec9000def617000009e555b1ae89) dm-4 DELL,MD3000i
size=3.4T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=6 status=active
| |- 5:0:0:2 sdh 8:112 active ready running
| `- 6:0:0:2 sdi 8:128 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  |- 3:0:0:2 sdg 8:96  active ghost running
  `- 4:0:0:2 sdf 8:80  active ghost running
mpathc (36001ec9000df04ae00000dba55b19f20) dm-3 DELL,MD3000i
size=2.2T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=6 status=active
| |- 3:0:0:1 sdc 8:32  active ready running
| `- 4:0:0:1 sdb 8:16  active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  |- 5:0:0:1 sdd 8:48  active ghost running
  `- 6:0:0:1 sde 8:64  active ghost running

Whereas on node 2 with exactly the same multipath.conf I just get:

Code:
root@ms-200-prox02:~# multipath -ll
Oct 26 13:14:15 | multipath.conf +21, invalid keyword: mulitpath
Oct 26 13:14:15 | multipath.conf +22, invalid keyword: device
Oct 26 13:14:15 | multipath.conf +23, invalid keyword: vendor
Oct 26 13:14:15 | multipath.conf +24, invalid keyword: product
Oct 26 13:14:15 | multipath.conf +26, invalid keyword: }
root@ms-200-prox02:~#
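
(Looking at those messages again, they point at a typo in /etc/multipath.conf itself: "mulitpath" instead of "multipath" at line 21, after which the parser rejects the nested device/vendor/product lines as well. I can only guess at the intended structure, but a Dell MD3000i device entry normally lives in a devices section, roughly like the sketch below; the option values are illustrative and should be checked against Dell's recommendations. And of course, if node 2 has no iSCSI sessions logged in yet, multipath would have no paths to assemble regardless of the config.)
Code:
# excerpt of /etc/multipath.conf -- illustrative values, verify against Dell's MD3000i guidance
devices {
        device {
                vendor                 "DELL"
                product                "MD3000i"
                hardware_handler       "1 rdac"
                path_checker           rdac
                prio                   rdac
                path_grouping_policy   group_by_prio
                failback               immediate
                no_path_retry          30
        }
}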
 
Both are version 4. The only difference is that node 1 was upgraded from the Beta.

Node 1:
Code:
proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-50 (running version: 4.0-50/d3a6b7e5)
pve-kernel-3.19.8-1-pve: 3.19.8-3
pve-kernel-4.1.3-1-pve: 4.1.3-7
pve-kernel-4.2.0-1-pve: 4.2.0-10
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-23
qemu-server: 4.0-31
pve-firmware: 1.1-7
libpve-common-perl: 4.0-32
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-27
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-10
pve-container: 1.0-10
pve-firewall: 2.0-12
pve-ha-manager: 1.0-10
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie

Node 2:
Code:
proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-48 (running version: 4.0-48/0d8559d0)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-22
qemu-server: 4.0-30
pve-firmware: 1.1-7
libpve-common-perl: 4.0-29
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-25
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-9
pve-container: 1.0-6
pve-firewall: 2.0-12
pve-ha-manager: 1.0-9
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie
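
(If the mismatch matters, I assume the straightforward fix is to bring node 2 up to the same package levels as node 1 with a normal upgrade against the same repository:)
Code:
# run on node 2 to pull it up to the same package versions as node 1
apt-get update
apt-get dist-upgrade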
 
Both are version 4. The only difference is that node 1 was upgraded from the Beta.

So what is wrong with your multipath config? Either it is wrong on both nodes, or you are using different versions of the multipath tools.
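
For example, comparing the installed multipath-tools on both nodes and running multipath verbosely usually shows why no maps get created (just a sketch, not a definitive recipe):
Code:
# compare the installed multipath-tools version on each node
dpkg -l multipath-tools
# run multipath with high verbosity to see why maps are (or are not) being created
multipath -v3
multipath -ll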
 
