LUN does not appear as active in my cluster.

Torrazka

New Member
Oct 10, 2023
Hello team! I have a question. Does anyone know how to activate the LUN? It shows as inactive when I add it to my cluster.
Regards!

[Attached screenshot: 1713410162075.png]
 
As far as I can see your setup appears normal & correct.

The 2 iSCSI shared LUNs are picked up on all nodes (Pikachuxxxxxx...)

The LVMs of the LUNs on the other hand (LUNPROXMOX01 & 02) can only be accessed by one node at a time.

The reason for this is that LVM on iSCSI is raw shared block storage, which makes it different from other types of shared storage: a logical volume on it can only be accessed by one host/node at a time, because there is no cluster-aware filesystem on top of it to coordinate concurrent access.

PVE will manage/activate the volume (LVM) on its own. So if you migrate/move a VM that uses that volume from one node to another, the volume will be activated on the target node & deactivated on the source node.

In short: The PVE backend itself implements proper cluster-wide locking.
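
If you want to see which node currently has a given logical volume active, a plain LVM query on each node will show it. This is generic LVM, nothing PVE-specific, and the VG/LV names below are placeholders:

# the 5th character of the Attr field is 'a' when the LV is active on this node
lvs -o lv_name,vg_name,lv_attr

# or simply:
lvscan

# PVE handles this for you, but activation can also be toggled by hand if you ever need to:
lvchange -ay <vg>/<lv>   # activate on this node
lvchange -an <vg>/<lv>   # deactivate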

In your case, it is interesting that the first LVM, LUNPROXMOX01, shows as active on all nodes. I'm guessing this is because it hasn't yet been actively accessed by any node, as opposed to the second LVM, LUNPROXMOX02, which has probably already been accessed by your RAICHU node. This probably has to do with the content types you have chosen in PVE for that LVM storage; maybe you enabled the rootdir (container) type on LUNPROXMOX02 but not on LUNPROXMOX01. I don't know for sure, but it's probably something like that. Maybe a reboot will change the situation.

Sharing LVM over iSCSI is often misunderstood. Search these forums.

Do note that LVM storage does not support snapshots.

See also the PVE docs.


Please note: I don't use iSCSI - but this is what I've learned in my time using PVE.
 
Thank you very much for the comment and your support. The strange thing is that when I restart the nodes they access the LUN without any problem, and the "?" on the nodes goes away, as if all the LUNs were active. I thought I would have to run some command to activate the second LUN on all the Proxmox nodes, or at least to make it show as active.

Regards!!
 
You're welcome. As I said, the PVE backend will handle activation on an "as-needed" basis.
 
PVE will manage/activate the volume (LVM) on its own. So if you migrate/move a VM that uses that volume from one node to another, the volume will be activated on the target node & deactivated on the source node.
For me this is not the case. I tried migrating a VM from the host on which the PV and VG were created to the secondary node (which currently shows a question mark for that LVM storage). The LVM stays inactive on the second node until I reboot it, and I find that strange. I want to avoid rebooting nodes across the cluster every time I add new storage. This is what happens if I try migrating the VM before rebooting prox03:

[Attached screenshot: 1716381362161.png]

Volume group "proxmox-lun03-vg" not found

TASK ERROR: can't activate LV '/dev/proxmox-lun03-vg/vm-100-disk-0': Cannot process volume group proxmox-lun03-vg

Has anyone managed to get around this? Is it a bad config somewhere? In case it helps, here is the relevant output:

root@prox03:~# pvs
  PV                        VG               Fmt  Attr PSize     PFree
  /dev/mapper/proxmox-lun01 proxmox-lun01-vg lvm2 a--  <1024.00g <1024.00g
  /dev/mapper/proxmox-lun02 proxmox-lun02-vg lvm2 a--  <1024.00g <1024.00g
  /dev/sda3                 pve              lvm2 a--   <743.62g    16.00g
root@prox03:~# vgs
  VG               #PV #LV #SN Attr   VSize     VFree
  proxmox-lun01-vg   1   0   0 wz--n- <1024.00g <1024.00g
  proxmox-lun02-vg   1   0   0 wz--n- <1024.00g <1024.00g
  pve                1   3   0 wz--n-  <743.62g    16.00g
root@prox03:~# pvscan
  PV /dev/mapper/proxmox-lun01   VG proxmox-lun01-vg   lvm2 [<1024.00 GiB / <1024.00 GiB free]
  PV /dev/mapper/proxmox-lun03   VG proxmox-lun03-vg   lvm2 [<500.00 GiB / <400.00 GiB free]
  PV /dev/mapper/proxmox-lun02   VG proxmox-lun02-vg   lvm2 [<1024.00 GiB / <1024.00 GiB free]
  PV /dev/sda3                   VG pve                lvm2 [<743.62 GiB / 16.00 GiB free]
  Total: 4 [3.21 TiB] / in use: 4 [3.21 TiB] / in no VG: 0 [0 ]
root@prox03:~# vgscan
  Found volume group "proxmox-lun01-vg" using metadata type lvm2
  Found volume group "proxmox-lun03-vg" using metadata type lvm2
  Found volume group "proxmox-lun02-vg" using metadata type lvm2
  Found volume group "pve" using metadata type lvm2
root@prox03:~# lvscan
  inactive          '/dev/proxmox-lun03-vg/vm-100-disk-0' [100.00 GiB] inherit
  ACTIVE            '/dev/pve/data' [611.14 GiB] inherit
  ACTIVE            '/dev/pve/swap' [8.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [96.00 GiB] inherit

Under lvscan, vm-100-disk-0 is inactive, but I suppose that is correct because it is currently active on the node hosting the VM? Only after I reboot the node do the proxmox-lun03 PV and the proxmox-lun03-vg VG show up.
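
Would a manual rescan be enough instead of a full reboot? Something along these lines is what I would try next (generic open-iscsi / multipath / LVM commands, not yet tested here):

# rescan the logged-in iSCSI sessions for the new LUN
iscsiadm -m session --rescan

# reload the multipath maps so /dev/mapper/proxmox-lun03 appears
multipath -r

# make LVM re-read the devices, then check for the VG
pvscan --cache
vgs proxmox-lun03-vg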

Best regards,

-N
 
Hello everyone!!! I'm replying in Spanish because there are very few of us Spanish speakers who are going to run into this problem.

The problem is solved as follows:

In the multipath.conf file, add the following setting to the "defaults" section:

polling_interval 10

Also, every time you add a LUN you will have to run the following command: multipath -r, so that multipath rescans and you can see the new LUN.
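
For reference, the resulting section of /etc/multipath.conf would look something like this (merge it with whatever defaults you already have there):

defaults {
    polling_interval 10
}

And after presenting a new LUN, the rescan is just:

# reload the multipath maps so the newly presented LUN shows up under /dev/mapper
multipath -r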

Regards!!! :D
 
