GUI can't browse/use storages

Spiros Papageorgiou

Well-Known Member
Aug 1, 2017
Hi all,

I have a 3-node cluster, and one of the nodes cannot browse the storage (show info, see files) from the GUI. The storage underneath is fine: from the shell I can see the volumes (LVM), read files, and the VMs are fine.
As a result, I cannot add a disk to a VM, because the GUI can't see/browse the storages. It can't even see the local storages (local, local-lvm).

When I check the processes I can see this, which is not normal:
1319032 root 20 0 64772 35468 4132 R 100. 0.0 0:43.47 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_na
1318964 root 20 0 66140 37048 4352 R 99.6 0.0 0:52.42 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_na
1318711 root 20 0 81308 52216 4276 R 99.6 0.0 1:35.81 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_na
1319010 root 20 0 65996 36768 4080 R 99.6 0.0 0:48.98 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix

I tried restarting pvestatd, but with no result.
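
For reference, the restart looked roughly like this (a sketch; killing the spinning vgs workers first, using the PIDs from the listing above, may also be needed):

# restart the Proxmox status daemon (this alone did not help here)
systemctl restart pvestatd
systemctl status pvestatd
# possibly also needed: kill the spinning vgs workers listed above
# (may not work if they are blocked in the kernel rather than spinning)
kill -9 1318711 1318964 1319010 1319032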

How can I resolve this?

Thanx,
Sp
 
* are the storages local to the box or is this a shared LVM (iSCSI, FC, external SAS enclosure)?
* check your `dmesg` and `journalctl -r` for potential hints (a quick-check sketch follows this list)
* please post:
** `cat /etc/pve/storage.cfg`
** `pvs`
** `vgs`
** `lvs -a`
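
A minimal sketch of those quick checks, assuming the stuck `vgs` workers from the post above:

# are the vgs workers spinning (R state) or blocked in the kernel (D state)?
ps -eo pid,stat,wchan:32,cmd | grep '[v]gs'
# recent kernel messages, e.g. blocked tasks or FC path events
dmesg -T | tail -n 50
# recent entries for the status daemon, newest first
journalctl -r -n 50 -u pvestatd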
 
Hi Stoiko,

I am attaching my storage.cfg file. Let me give you a little more detail about this.
While the situation was ongoing, I was at first still able to run "lvdisplay". After some time the situation got worse: I could no longer run lvdisplay, and the cluster stopped receiving info about my troubled server (in the GUI, every icon for the server and its VMs was a '?').
After some more time, the VMs stopped working (they had been working until then).

I could not migrate the VMs (even from the CLI) and had to reboot. The same thing happened on the second server, in exactly the same way, and I had to reboot it as well. The server never lost the FC-mounted storage; all paths were up and active the whole time.

My setup has many parts, but my VMs are on a shared LVM volume (vol1) that resides on FC-attached storage, with multipathd configured to provide the multipathed device.
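
For completeness, the relevant piece of the multipath config looks roughly like this (a simplified sketch; the WWID is the one `multipath -ll` shows for mpath0 below):

# /etc/multipath.conf (excerpt, simplified)
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        wwid  360002ac0000000000000000300020fdf
        alias mpath0
    }
}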

Anyway, it seems that something Proxmox queries hangs forever, and the GUI cannot proceed because of it. I believe the Proxmox software should handle this kind of error better and provide informative feedback.
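
One way to make the hang visible is to run the same vgs query pvestatd issues, wrapped in a timeout. The option list below is only a guess at the completion of the truncated "vg_na" in the process listing above, so treat it as hypothetical:

# run the vgs query from the process listing, but give up after 10 seconds
timeout 10 /sbin/vgs --separator : --noheadings --units b \
    --unbuffered --nosuffix --options vg_name,vg_size,vg_free \
    || echo "vgs hung or failed"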

I had a first look at the logs and didn't see anything that indicated a problem or was otherwise useful. I'll check again, though.

The warnings you see in the lvs output are for volume "azurevol", which is not mounted on this server but is presented by the storage.
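
A device filter in lvm.conf would probably silence those warnings and keep LVM from scanning every FC path separately. A minimal sketch, not a drop-in config: it assumes the local boot disk is /dev/sdd and that the azurevol LUN gets its own multipath map first:

# /etc/lvm/lvm.conf (devices section, sketch)
devices {
    # first match wins: accept multipath maps and the boot disk,
    # reject all raw /dev/sd* paths so each FC path is not scanned twice
    global_filter = [ "a|^/dev/mapper/mpath|", "a|^/dev/sdd|", "r|^/dev/sd|" ]
}

After the change, re-running `pvs` should no longer print the duplicate-PV warnings.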

Regards,
Sp


root@hs1:~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content backup,vztmpl,iso

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images

lvm: vol1
    vgname volgrp1-3par
    content images,rootdir
    shared 1

nfs: warehouse
    export /mnt/backup/barphone
    path /mnt/pve/warehouse
    server x.x.x.x
    content rootdir,iso,images,backup,vztmpl
    maxfiles 4
    options vers=3

nfs: useful
    export /mnt/backup/useful
    path /mnt/pve/useful
    server x.x.x.x
    content iso
    options vers=3

nfs: nfsbackup
    export /mnt/backup/vms/hs
    path /mnt/pve/nfsbackup
    server x.x.x.x
    content images,rootdir,vztmpl,backup
    maxfiles 10
    options vers=3

lvm: vol2ssd
    vgname volssd1-3par
    content images,rootdir
    shared 1

lvm: azurevol
    vgname volazure
    content images,rootdir
    nodes hs3
    shared 0

rbd: ssdr1
    content images
    krbd 0
    pool ssd_r1



root@hs1:~# lvs -a
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH on /dev/sdg1 was already found on /dev/sds1.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH on /dev/sdk1 was already found on /dev/sds1.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH on /dev/sdo1 was already found on /dev/sds1.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH prefers device /dev/sds1 because device was seen first.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH prefers device /dev/sds1 because device was seen first.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH prefers device /dev/sds1 because device was seen first.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 143.54g 1.42 1.14
[data_tdata] pve Twi-ao---- 143.54g
[data_tmeta] pve ewi-ao---- 72.00m
[lvol0_pmspare] pve ewi------- 72.00m
root pve -wi-ao---- 55.75g
swap pve -wi-ao---- 8.00g
vm-102-disk-1 pve Vwi-a-tz-- 8.00g data 18.36
vm-107-disk-1 pve Vwi-aotz-- 4.00g data 14.08
vm-118-cloudinit pve Vwi-a-tz-- 8.00m data 4.69
vm-9000-cloudinit pve Vwi-a-tz-- 8.00m data 0.00
test volgrp1-3par -wi-a----- 20.00g
vm-100-disk-1 volgrp1-3par -wi-a----- 32.00g
vm-101-disk-1 volgrp1-3par -wi-a----- 60.00g
vm-103-disk-1 volgrp1-3par -wi-a----- 32.00g
vm-103-disk-2 volgrp1-3par -wi-a----- 20.00g
vm-104-disk-1 volgrp1-3par -wi-ao---- 64.00g
vm-105-disk-1 volgrp1-3par -wi-a----- 20.00g
vm-106-disk-1 volgrp1-3par -wi-a----- 100.00g
vm-108-disk-1 volgrp1-3par -wi-a----- 60.00g
vm-109-disk-1 volgrp1-3par -wi-a----- 60.00g
vm-110-disk-1 volgrp1-3par -wi-a----- 60.00g
vm-111-disk-1 volgrp1-3par -wi-a----- 10.00g
vm-112-disk-1 volgrp1-3par -wi-a----- 115.00g
vm-113-disk-1 volgrp1-3par -wi-a----- 115.00g
vm-114-disk-1 volgrp1-3par -wi-a----- 115.00g
vm-115-disk-1 volgrp1-3par -wi-ao---- 115.00g
vm-116-disk-1 volgrp1-3par -wi-ao---- 115.00g
vm-117-disk-1 volgrp1-3par -wi-ao---- 50.00g
vm-118-disk-1 volgrp1-3par -wi-a----- 12.20g
vm-121-disk-1 volgrp1-3par -wi-a----- 40.00g
vm-122-disk-1 volgrp1-3par -wi-a----- 40.00g
vm-124-disk-1 volgrp1-3par -wi-a----- 20.00g
vm-124-disk-2 volgrp1-3par -wi-a----- 32.00g
vm-125-disk-1 volgrp1-3par -wi-a----- 40.00g
vm-126-disk-1 volgrp1-3par -wi-a----- 8.00g
vm-127-disk-1 volgrp1-3par -wi-ao---- 32.00g
vm-128-disk-1 volgrp1-3par -wi-a----- 32.00g
vm-129-disk-1 volgrp1-3par -wi-a----- 32.00g
vm-130-disk-1 volgrp1-3par -wi-a----- 32.00g
vm-131-disk-1 volgrp1-3par -wi-a----- 2.00g
vm-132-disk-1 volgrp1-3par -wi-a----- 8.00g
vm-133-disk-1 volgrp1-3par -wi-a----- 40.00g
vm-134-disk-1 volgrp1-3par -wi-a----- 40.00g
vm-135-disk-1 volgrp1-3par -wi-a----- 20.00g
vm-135-disk-2 volgrp1-3par -wi-a----- 32.00g
vm-136-disk-0 volgrp1-3par -wi-a----- 80.00g
vm-601-disk-1 volgrp1-3par -wi-a----- 60.00g
vm-602-disk-1 volgrp1-3par -wi-a----- 60.00g
vm-9000-disk-1 volgrp1-3par -wi-a----- 2.20g
vm-9990-disk-1 volgrp1-3par -wi-a----- 8.00g
vm-9992-disk-1 volgrp1-3par -wi-a----- 10.00g
vm-9993-disk-1 volgrp1-3par -wi-a----- 10.00g
vm-9994-disk-1 volgrp1-3par -wi-a----- 10.00g
vm-9995-disk-1 volgrp1-3par -wi-a----- 10.00g
vm-9996-disk-1 volgrp1-3par -wi-a----- 10.00g
vm-9997-disk-1 volgrp1-3par -wi-a----- 10.00g
vm-9998-disk-1 volgrp1-3par -wi-a----- 10.00g
vm-9999-disk-1 volgrp1-3par -wi-a----- 10.00g
vm-117-disk-1 volssd1-3par -wi-ao---- 50.00g
vm-119-disk-1 volssd1-3par -wi-a----- 32.00g
root@hs1:~#

root@hs1:~# vgs
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH on /dev/sdg1 was already found on /dev/sds1.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH on /dev/sdk1 was already found on /dev/sds1.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH on /dev/sdo1 was already found on /dev/sds1.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH prefers device /dev/sds1 because device was seen first.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH prefers device /dev/sds1 because device was seen first.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH prefers device /dev/sds1 because device was seen first.
VG #PV #LV #SN Attr VSize VFree
pve 1 7 0 wz--n- 223.28g 15.85g
volazure 1 0 0 wz--n- 1023.95g 1023.95g
volgrp1-3par 1 48 0 wz--n- 1.95t 84.55g
volssd1-3par 1 2 0 wz--n- 1023.95g 941.95g
root@hs1:~# multipath -ll
mpath0 (360002ac0000000000000000300020fdf) dm-0 3PARdata,VV
size=5.9T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 1:0:0:0 sdf 8:80 active ready running
|- 3:0:0:0 sdn 8:208 active ready running
|- 1:0:1:0 sdj 8:144 active ready running
`- 3:0:1:0 sdr 65:16 active ready running
mpathhc2 (360002ac0000000000000000900020fdf) dm-52 3PARdata,VV
size=2.0T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 1:0:0:2 sdh 8:112 active ready running
|- 3:0:0:2 sdp 8:240 active ready running
|- 1:0:1:2 sdl 8:176 active ready running
`- 3:0:1:2 sdt 65:48 active ready running
mpathhcssd (360002ac0000000000000001100020fdf) dm-53 3PARdata,VV
size=1.0T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 1:0:0:4 sdi 8:128 active ready running
|- 3:0:0:4 sdq 65:0 active ready running
|- 1:0:1:4 sdm 8:192 active ready running
`- 3:0:1:4 sdu 65:64 active ready running

=============================================================
root@hs1:~# pvs
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH on /dev/sdg1 was already found on /dev/sds1.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH on /dev/sdk1 was already found on /dev/sds1.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH on /dev/sdo1 was already found on /dev/sds1.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH prefers device /dev/sds1 because device was seen first.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH prefers device /dev/sds1 because device was seen first.
WARNING: PV DuxwXY-O1i7-fRja-1Zlt-6NW0-Orn6-rczcZH prefers device /dev/sds1 because device was seen first.
PV VG Fmt Attr PSize PFree
/dev/mapper/mpath0-part1 volgrp1-3par lvm2 a-- 1.95t 84.55g
/dev/mapper/mpathhcssd-part1 volssd1-3par lvm2 a-- 1023.95g 941.95g
/dev/sdd3 pve lvm2 a-- 223.28g 15.85g
/dev/sds1 volazure lvm2 a-- 1023.95g 1023.95g
 
