LVM/iSCSI via Cluster GUI - Other nodes not creating PV/VG/LVM after storage.cfg syncs correctly

TC_Tecnet

New Member
Jun 23, 2025
Hi All,

I see others have noted this issue, but it was never clarified whether PV/VG/LVM creation on each node should be handled automatically when creating shared LVM-over-iSCSI storage at the cluster level. The primary node creates everything and storage.cfg syncs across; however, the other two nodes in the cluster do not create the PV/VG/LVM after the sync.
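For reference, the relevant part of /etc/pve/storage.cfg that syncs across looks roughly like this (a sketch only; the storage and VG names match the outputs further down, while the portal/target values are placeholders):

Code:
iscsi: MSA2050
        portal <portal-ip>
        target <target-iqn>
        content none

lvm: Proxmox_VMs_A_Pool
        vgname vg_proxmox_vms_a_pool
        content images
        shared 1

lvm: Proxmox_VMs_B_Pool
        vgname vg_proxmox_vms_b_pool
        content images
        shared 1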

Is this expected? Perhaps there is some dependency missing, since this cluster has been upgraded from older versions. There was nothing error-wise in the PVE task logs from what I could see.

The iSCSI multipath is configured identically on each node and is working correctly:

Code:
root@proxmox1:~# multipath -ll
msa_2050_proxmox_vms_a_pool (3600c0ff0003c81a64cef536801000000) dm-26 HPE,MSA 2050 SAN
size=8.6T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 3:0:0:4 sdt 65:48 active ready running
| `- 4:0:0:4 sdu 65:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 2:0:0:4 sds 65:32 active ready running
  `- 5:0:0:4 sdv 65:80 active ready running
msa_2050_proxmox_vms_b_pool (3600c0ff0003c80a05b69556801000000) dm-48 HPE,MSA 2050 SAN
size=8.6T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 2:0:0:5 sdo 8:224 active ready running
| `- 5:0:0:5 sdr 65:16 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:0:5 sdp 8:240 active ready running
  `- 4:0:0:5 sdq 65:0  active ready running

Code:
root@proxmox2:~# multipath -ll
msa_2050_proxmox_vms_a_pool (3600c0ff0003c81a64cef536801000000) dm-19 HPE,MSA 2050 SAN
size=8.6T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 2:0:0:4 sds 65:32 active ready running
| `- 5:0:0:4 sdv 65:80 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:0:4 sdt 65:48 active ready running
  `- 4:0:0:4 sdu 65:64 active ready running
msa_2050_proxmox_vms_b_pool (3600c0ff0003c80a05b69556801000000) dm-61 HPE,MSA 2050 SAN
size=8.6T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 3:0:0:5 sdp 8:240 active ready running
| `- 4:0:0:5 sdq 65:0  active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 2:0:0:5 sdo 8:224 active ready running
  `- 5:0:0:5 sdr 65:16 active ready running

Code:
root@proxmox3:~# multipath -ll
msa_2050_proxmox_vms_a_pool (3600c0ff0003c81a64cef536801000000) dm-6 HPE,MSA 2050 SAN
size=8.6T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 4:0:0:4 sdu 65:64 active ready running
| `- 5:0:0:4 sdv 65:80 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 2:0:0:4 sds 65:32 active ready running
  `- 3:0:0:4 sdt 65:48 active ready running
msa_2050_proxmox_vms_b_pool (3600c0ff0003c80a05b69556801000000) dm-8 HPE,MSA 2050 SAN
size=8.6T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 2:0:0:5 sdo 8:224 active ready running
| `- 3:0:0:5 sdp 8:240 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 4:0:0:5 sdq 65:0  active ready running
  `- 5:0:0:5 sdr 65:16 active ready running
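For completeness, the friendly names above come from /etc/multipath.conf aliases along these lines (a sketch; the WWIDs are taken from the multipath -ll output, the rest of the file is assumed):

Code:
multipaths {
    multipath {
        wwid  3600c0ff0003c81a64cef536801000000
        alias msa_2050_proxmox_vms_a_pool
    }
    multipath {
        wwid  3600c0ff0003c80a05b69556801000000
        alias msa_2050_proxmox_vms_b_pool
    }
}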

Code:
root@proxmox1:~# pvesm status
Name                      Type     Status           Total            Used       Available        %
LocalStorage               dir     active      2306280676      1200347064       988706708   52.05%
MSA2050                  iscsi     active               0               0               0    0.00%
PBS                        pbs     active      3728442592       876956420      2662016888   23.52%
Proxmox_VMs_A_Pool         lvm     active      9277337600               0      9277337600    0.00%
Proxmox_VMs_B_Pool         lvm     active      9277337600               0      9277337600    0.00%
local                      dir     active        98497780        54059784        39388448   54.88%

Code:
root@proxmox2:~# pvesm status
Name                      Type     Status           Total            Used       Available        %
LocalStorage               dir     active      2306280676      1541473928       647579844   66.84%
MSA2050                  iscsi     active               0               0               0    0.00%
PBS                        pbs     active      3728442592       876956420      2662016888   23.52%
Proxmox_VMs_A_Pool         lvm   inactive               0               0               0    0.00%
Proxmox_VMs_B_Pool         lvm   inactive               0               0               0    0.00%
local                      dir     active        98497780        20654036        72794196   20.97%

Code:
root@proxmox3:~# pvesm status
Name                      Type     Status           Total            Used       Available        %
LocalStorage               dir     active      2306280676       240282808      1948770964   10.42%
MSA2050                  iscsi     active               0               0               0    0.00%
PBS                        pbs     active      3728442592       876956420      2662016888   23.52%
Proxmox_VMs_A_Pool         lvm   inactive               0               0               0    0.00%
Proxmox_VMs_B_Pool         lvm   inactive               0               0               0    0.00%
local                      dir     active        98497780        39278100        54170132   39.88%
 
Hi @TC_Tecnet , welcome to the forum.

The initial (manual) LVM management is done from one node only (any node). The LVM signature is written to the disk and will be seen by all nodes sharing access to the LUN. Going forward, PVE tightly coordinates LVM management internally to ensure that only one node at a time performs metadata operations (create, delete, etc.).
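For illustration, the one-time creation the GUI performs is roughly equivalent to running the following on a single node (names taken from the outputs in this thread; the exact options are an approximation, not the GUI's literal commands):

Code:
# Run on ONE node only - the LVM metadata lands on the shared LUN itself
pvcreate /dev/mapper/msa_2050_proxmox_vms_a_pool
vgcreate vg_proxmox_vms_a_pool /dev/mapper/msa_2050_proxmox_vms_a_pool

# Register the VG cluster-wide; /etc/pve/storage.cfg then syncs to all nodes
pvesm add lvm Proxmox_VMs_A_Pool --vgname vg_proxmox_vms_a_pool --content images --shared 1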

You may find this article helpful: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Edit: it looks like "pvscan --cache" fixed things without a reboot or manual creation, so the details below can be ignored.
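For anyone landing here with the same symptom, the fix boiled down to refreshing LVM's view of the devices on each affected node, roughly:

Code:
# On each node that shows the shared LVM storage as inactive (proxmox2/proxmox3 here)
pvscan --cache      # rescan block devices and refresh LVM's cached PV state
pvs && vgs          # the shared PVs/VGs should now be listed
pvesm status        # the lvm storages should switch from inactive to active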



To be specific, this was the LVM being created from the GUI, rather than creating the PV/VG manually on each node and then registering it with PVE.

pvs and vgs are both missing the newly added storage on the other nodes, while pvscan and vgscan both show it.

Code:
root@proxmox1:~# pvs
  PV                                      VG                    Fmt  Attr PSize    PFree
  /dev/mapper/msa_2050_proxmox_vms_a_pool vg_proxmox_vms_a_pool lvm2 a--     8.64t  8.64t
  /dev/mapper/msa_2050_proxmox_vms_b_pool vg_proxmox_vms_b_pool lvm2 a--     8.64t  8.64t
  /dev/nvme0n1p3                          pve                   lvm2 a--  <446.07g 16.00g
  /dev/sda                                local_storage         lvm2 a--     2.18t     0
root@proxmox1:~# pvscan
  PV /dev/mapper/msa_2050_proxmox_vms_b_pool   VG vg_proxmox_vms_b_pool   lvm2 [8.64 TiB / 8.64 TiB free]
  PV /dev/mapper/msa_2050_proxmox_vms_a_pool   VG vg_proxmox_vms_a_pool   lvm2 [8.64 TiB / 8.64 TiB free]
  PV /dev/nvme0n1p3                            VG pve                     lvm2 [<446.07 GiB / 16.00 GiB free]
  PV /dev/sda                                  VG local_storage           lvm2 [2.18 TiB / 0    free]

Code:
root@proxmox1:~# vgs
  VG                    #PV #LV #SN Attr   VSize    VFree
  local_storage           1   1   0 wz--n-    2.18t     0
  pve                     1   3   0 wz--n- <446.07g 16.00g
  vg_proxmox_vms_a_pool   1   0   0 wz--n-    8.64t  8.64t
  vg_proxmox_vms_b_pool   1   0   0 wz--n-    8.64t  8.64t
root@proxmox1:~# vgscan
  Found volume group "vg_proxmox_vms_b_pool" using metadata type lvm2
  Found volume group "vg_proxmox_vms_a_pool" using metadata type lvm2
  Found volume group "pve" using metadata type lvm2
  Found volume group "local_storage" using metadata type lvm2


Code:
root@proxmox2:~# pvs
  PV                           VG            Fmt  Attr PSize    PFree
  /dev/nvme0n1p3               pve           lvm2 a--  <446.07g 16.00g
  /dev/sda                     local_storage lvm2 a--     2.18t     0
root@proxmox2:~# pvscan
  PV /dev/mapper/msa_2050_proxmox_vms_b_pool   VG vg_proxmox_vms_b_pool   lvm2 [8.64 TiB / 8.64 TiB free]
  PV /dev/mapper/msa_2050_proxmox_vms_a_pool   VG vg_proxmox_vms_a_pool   lvm2 [8.64 TiB / 8.64 TiB free]
  PV /dev/nvme0n1p3                            VG pve                     lvm2 [<446.07 GiB / 16.00 GiB free]
  PV /dev/sda                                  VG local_storage           lvm2 [2.18 TiB / 0    free]

Code:
root@proxmox2:~# vgs
  VG            #PV #LV #SN Attr   VSize    VFree
  local_storage   1   1   0 wz--n-    2.18t     0
  pve             1   3   0 wz--n- <446.07g 16.00g
root@proxmox2:~# vgscan
  Found volume group "vg_proxmox_vms_b_pool" using metadata type lvm2
  Found volume group "vg_proxmox_vms_a_pool" using metadata type lvm2
  Found volume group "pve" using metadata type lvm2
  Found volume group "local_storage" using metadata type lvm2
 