[SOLVED] Adding iSCSI storage to cluster

pmuch

Member
Sep 10, 2020
Hello.
I've added an IBM v3700 array to a Proxmox cluster via iSCSI, unchecked "use LUN directly", and then created an LVM on it. It works fine on the node where I set it up, but the other nodes show the storage as inaccessible, with a question mark. It becomes available after a reboot, but how can I configure this storage on the other nodes without rebooting them, or without removing the storage and re-adding it from every node's web panel?
 
More details:
/etc/pve/storage.cfg

Code:
iscsi: M1
        portal 172.18.58.41
        target iqn.1986-03.com.ibm:2145.v3700.node2
        content none
        nodes pve1

lvm: lvm-m1
        vgname vg-m1
        content images,rootdir
        shared 1
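
One thing worth double-checking in a config like this: the "nodes pve1" line restricts the iscsi entry to pve1 only, so the other cluster nodes won't bring up that iSCSI session through Proxmox, while the lvm entry on top of it is shared cluster-wide. If the session should exist on every node, the iscsi entry would look like this (a sketch, same portal/target as above, just without the nodes restriction):

Code:
iscsi: M1
        portal 172.18.58.41
        target iqn.1986-03.com.ibm:2145.v3700.node2
        content none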

On the working node:
Code:
   --- Volume group ---
  VG Name               vg-m1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <15,00 TiB
  PE Size               4,00 MiB
  Total PE              3932159
  Alloc PE / Size       0 / 0
  Free  PE / Size       3932159 / <15,00 TiB
  VG UUID               Y6GN0d-0EFU-ujp5-lIo4-UQO9-Hjpz-1sdSgq



  --- Physical volume ---
  PV Name               /dev/sdi
  VG Name               vg-m1
  PV Size               15,00 TiB / not usable 4,00 MiB
  Allocatable           yes
  PE Size               4,00 MiB
  Total PE              3932159
  Free PE               3932159
  Allocated PE          0
  PV UUID               dKww5R-3iBg-Ipmd-3C3N-U1QZ-AznG-UvtYWf


On other nodes it's missing. BUT vgscan shows this VG:

Code:
root@pve3:/etc/pve# vgscan
  Found volume group "vg-m1" using metadata type lvm2


I figured out that running vgimport -a on all nodes solves the problem; however, I think it should be done automatically.
 
I don't have much experience with iSCSI/LVM setups, so I can only suggest checking whether this type of storage requires the "shared" flag to work cluster-wide.
 
pmuch said: "I figured out that running vgimport -a on all nodes solves the problem; however, I think it should be done automatically."

Thank You pmuch! This solved my problem as well :D

Like you said, `vgscan` showed that the secondary hosts could see the LVM VG (they were flagged with a question mark in the Proxmox GUI, with "Active: No" showing), but it wasn't automatically imported on the secondary hosts in the cluster at creation time. It was only imported automatically on the host where it was created.

Simply running `sudo vgimport -a` on the secondary hosts brought it online and Active immediately, with no issues :cool:
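
For reference, the full sequence I ran on an affected node (a sketch; `vg-m1` is the VG name from the storage.cfg earlier in the thread, so substitute your own):

```shell
# import any volume groups that are visible to this node but not yet imported
vgimport -a

# make sure the volume group is activated so Proxmox can use it
vgchange -ay vg-m1

# confirm the LVM storage now shows as active in Proxmox
pvesm status
```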
 
I ran into a similar scenario using shared volumes from FlashArray (a high-performance storage platform from Pure Storage). It worked fine on a single node, and also after I added a 2nd node to the cluster, but upon adding the 3rd, 4th, and 5th nodes, those nodes became inaccessible because they could not bring the iSCSI connection up (multipath was configured, and the nodes could see the VGs).

In the end, I reinstalled Proxmox on all 5 nodes, configured the networks, created the iSCSI connections on each of them, and configured multipath, but did not create LVMs on top of it yet.
Only once those 5 nodes were clustered up did I create the LVMs, and that did the trick.
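
In case it helps anyone with a similar multipath setup, these are the two standard checks (plain open-iscsi / multipath-tools commands, nothing Proxmox-specific) I'd run on each node to confirm the session and paths are actually up before creating any LVMs:

```shell
# confirm this node has a live iSCSI session to the array
iscsiadm -m session

# show the multipath topology: every path to the LUN should be active
multipath -ll
```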
 
