Lost VGs for LUN after upgrade

NPK

Hi,

I upgraded a pve7 cluster to pve8. It's a cluster with SAN storage.

Since the upgrade, I can't access the LUN storage, and the VGs have disappeared from the GUI storage configuration. The command "pvs" returns nothing.

What is wrong? What can I do?

Thanks.
 
Hi,
how is the storage configured? Are there any messages in the system logs/journal?
 
Hi,

Thanks for replying.

Storage:

Code:
dir: local
        path /var/lib/vz
        content images,iso,backup,vztmpl
        shared 0

lvmthin: local-lvm
        disable
        thinpool data
        vgname pve
        content rootdir,images
        nodes pve-mgt-01

lvm: LUN_01
        vgname VG_LUN_PVE_01
        content rootdir,images
        nodes pve-node-03,pve-node-01,pve-node-02,pve-node-04
        shared 1

lvm: LUN_02
        vgname VG_LUN_PVE_02
        content images,rootdir
        nodes pve-node-03,pve-node-01,pve-node-04,pve-node-02
        shared 1

lvm: LUN_03
        vgname VG_LUN_PVE_03
        content images,rootdir
        nodes pve-node-04,pve-node-02,pve-node-03,pve-node-01
        shared 1

zfspool: store-node-04
        pool store-node-04
        content rootdir,images
        mountpoint /store-node-04
        nodes pve-node-04
        sparse 1

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        mountpoint /rpool/data
        nodes pve-node-04,pve-node-02,pve-node-03,pve-node-01
        sparse 1

zfspool: store-node-01
        pool store-node-01
        content rootdir,images
        mountpoint /store-node-01
        nodes pve-node-01

zfspool: store-node-02
        pool store-node-02
        content images,rootdir
        mountpoint /store-node-02
        nodes pve-node-02

zfspool: store-node-03
        pool store-node-03
        content rootdir,images
        mountpoint /store-node-03
        nodes pve-node-03

lvm: LUN_04
        vgname VG_LUN_PVE_04
        content rootdir,images
        nodes pve-node-03,pve-node-02,pve-node-04,pve-node-01
        shared 1

The journalctl -b logfile is attached.

/etc/multipath.conf is still there, and so is /etc/multipath/wwids. The FC HBAs are QLogic ISP2432-based.
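
For what it's worth, a quick way to compare the WWIDs multipath expects against what the kernel currently exposes (a minimal sketch; device naming may differ per system):

Code:
# WWIDs multipath has been told to manage
cat /etc/multipath/wwids
# WWN-based device links the kernel currently exposes; if this is empty,
# the problem is below multipath, at the FC/SCSI layer
ls /dev/disk/by-id/ | grep -i '^wwn-'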
 


It seems you are running kernel 5.15.158-2-pve, but Proxmox VE 8 ships with kernel 6.2 or newer. Can you post the output of pveversion -v? Did you reboot after the upgrade?
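
For reference, a quick way to confirm which kernel is actually running and which kernels are installed (the last command assumes the system boots via proxmox-boot-tool; skip it otherwise):

Code:
uname -r                        # kernel currently running
pveversion -v                   # Proxmox VE package versions, including kernel packages
proxmox-boot-tool kernel list   # kernels installed/pinned (proxmox-boot-tool setups only)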

What is the output of the following:
Code:
pvscan
lsblk
multipath -ll
See also https://pve.proxmox.com/wiki/Multipath, in particular the Troubleshooting section; maybe something from there helps.
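
One thing from that area worth ruling out is an LVM device filter hiding the PVs; a minimal sketch (read-only, and the accept-all filter is only for testing):

Code:
# Show any device filters currently configured for LVM
grep -E '^\s*(global_)?filter' /etc/lvm/lvm.conf
# Re-run the scan with an accept-all filter to see whether filtering is the problem
pvs --config 'devices { global_filter = [ "a|.*|" ] }'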
 
I tried booting different kernels, in case the kernel was the cause. No difference.

pvscan: "No matching physical volumes found"
lsblk: only local disks (from sda to sdp)
multipath -ll: nothing.

But I found some error messages from before the reboot, and 'multipath -ll' on some nodes was reporting "failed faulty running" for the SAN disks. I'm afraid the SAN itself might be the cause...
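
If the paths were already faulty before the reboot, the FC layer is worth a look; a minimal sketch (qla2xxx is the driver for these QLogic HBAs):

Code:
# FC link state per HBA port (should say "Online")
cat /sys/class/fc_host/host*/port_state
# Driver/transport errors from the current boot
journalctl -b -k | grep -iE 'qla2xxx|rport'
# Ask the kernel to rescan all SCSI hosts for new/returned LUNs
for h in /sys/class/scsi_host/host*/scan; do echo '- - -' > "$h"; done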
 
