Can't Access Storage

This is a weird one. I upgraded to Proxmox 8.2 the other day and now one of my nodes (2-node setup) effectively can't access any storage. I use ZFS-over-iSCSI storage for the bulk of my guests, but I also have a couple of local LVM and Directory storages set up. All guests can be started on this host and I can migrate guests freely between the two nodes. However, I cannot create new guests on this host because the storage dropdown appears empty.

[Screenshot: the storage dropdown in the Create VM dialog is empty]

Clicking on any of the storage entries under the host on the left shows 'loading' under Summary and eventually reports 'communication failure (0)'. However, if I click on VM Disks, it displays them. This behavior is present on ALL storage options, but only on this node.

[Screenshots: storage Summary stuck on 'loading', then showing 'communication failure (0)']

I have rebooted the node, brought down the cluster (shut down both hosts), removed the storage from the node and re-added it, and removed the storage from the cluster and re-added it. None of this works. Any ideas on how to resolve this?
 
Sure looks weird.

I assume you've checked all the network configuration on the updated node: no renamed NICs, and bridges/addresses etc. all in order. If you haven't, do so.
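For example, a quick sanity check could look like this (just a sketch; the interface and bridge names are whatever your setup uses):

Code:
# Show link state and assigned addresses for all interfaces and bridges
ip -br link
ip -br addr
# Compare against the configured interface/bridge names
cat /etc/network/interfaces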

Next, check that the update(s) actually installed correctly.
What does pveversion -v show? (Post in code tags if possible.)

Edit: One other thing. You mention it's a 2-node cluster. What is happening concerning quorum maintenance?
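A minimal check for that might be (sketch):

Code:
# Cluster membership and quorum state on the upgraded node
pvecm status
# Health of the cluster services backing /etc/pve
systemctl status corosync pve-cluster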
 
Yes, I've checked my network configs and everything appears to be working fine. The fact that I can see VM disks on the remote storage also supports this.

Code:
root@proxve1:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.4-2-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-4
proxmox-kernel-6.8: 6.8.4-2
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.1
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.2-1
proxmox-backup-file-restore: 3.2.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.6
pve-container: 5.0.11
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.5
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2
 
There are some updates available; maybe install them through the GUI or:
Code:
apt-get update
apt-get dist-upgrade

Also, what does your storage.cfg look like on both nodes?
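For reference, that file lives on the clustered filesystem, so it should be identical on both nodes:

Code:
cat /etc/pve/storage.cfg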
 
Hi,
did you clear your browser cache after the upgrade? What is the output of pvesm status? Does it return quickly? Can you see anything interesting in the system logs/journal?
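For context: the storage status shown in the GUI is gathered by pvestatd, so timing the query and checking the journals of the PVE services is a reasonable starting point (a sketch):

Code:
# How long does a storage status query take?
time pvesm status
# Recent log entries from the services involved in GUI status updates
journalctl -u pvestatd -u pvedaemon -u pveproxy --since "1 hour ago"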
 
There are some updates available; maybe install them through the GUI or:
Code:
apt-get update
apt-get dist-upgrade

Also, what does your storage.cfg look like on both nodes?
Updated, no change.

Code:
root@proxve1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content rootdir,iso,vztmpl
        shared 0


lvmthin: Containers
        thinpool Containers
        vgname Containers
        content images,rootdir
        nodes proxve1


dir: local1
        path /mnt/pve/local1
        content iso,rootdir,images,snippets,vztmpl
        is_mountpoint 1
        nodes proxve1
        shared 0


zfs: TrueNAS
        blocksize 8k
        iscsiprovider freenas
        pool VMPool
        portal 10.1.0.253
        target iqn.2005-10.org.freenas.ctl:vmpool
        content images
        freenas_apiv4_host 10.1.0.253
        freenas_password redacted
        freenas_use_ssl 0
        freenas_user root
        nowritecache 0
        sparse 0


pbs: PBS
        datastore Backup
        server pbs.gruntlabs.net
        content backup
        fingerprint redacted
        prune-backups keep-all=1
        username redacted


nfs: Backup
        export /mnt/Backup
        path /mnt/pve/Backup
        server 10.1.0.253
        content iso,vztmpl,snippets,images,rootdir
        options vers=4.2
        prune-backups keep-all=1


root@proxve1:~#


Hi,
did you clear your browser cache after the upgrade? What is the output of pvesm status? Does it return quickly? Can you see anything interesting in the system logs/journal?

Code:
root@proxve1:~# pvesm status
  WARNING: VG name ubuntu-vg is used by VGs MbKFgV-PoWc-QiFE-eGNS-kdlg-IpmH-OFP4Xx and oSGAad-Hf2n-pP4m-JDyV-OCrr-G3P6-eTuOcl.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs Tz2zYa-39Ak-J7ke-tviy-Ll2T-0ISa-AR9Qc5 and oSGAad-Hf2n-pP4m-JDyV-OCrr-G3P6-eTuOcl.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name pve is used by VGs 9KVerH-3XZY-1kVA-DZxD-fhAr-0bcK-y9rDHH and 0Ps7AS-FlX0-MXUv-T7Bf-1YsQ-s3Pj-q2miVK.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs v7tn1u-8t4W-dDKi-T9Xt-hIrk-xmWC-jxmkkP and oSGAad-Hf2n-pP4m-JDyV-OCrr-G3P6-eTuOcl.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs Ml8iij-FOa3-WHsr-Rxlu-kCHy-u6Rt-ivePgw and oSGAad-Hf2n-pP4m-JDyV-OCrr-G3P6-eTuOcl.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs r3pOf8-tdaR-8som-2AlP-3TAu-fzj0-7pKsPF and oSGAad-Hf2n-pP4m-JDyV-OCrr-G3P6-eTuOcl.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs YCSZsn-oNPZ-mPjs-rVqD-hhWE-pbvV-oR7OOo and oSGAad-Hf2n-pP4m-JDyV-OCrr-G3P6-eTuOcl.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs tGsFST-MKdd-1wb2-MBig-ZyOJ-f8ym-6JvODX and oSGAad-Hf2n-pP4m-JDyV-OCrr-G3P6-eTuOcl.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  WARNING: VG name ubuntu-vg is used by VGs 21icZx-PWpK-i0Zx-XwIM-omLs-ZaDo-glfvXU and oSGAad-Hf2n-pP4m-JDyV-OCrr-G3P6-eTuOcl.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
Use of uninitialized value $size in int at /usr/share/perl5/PVE/Storage/LVMPlugin.pm line 133.
Use of uninitialized value $free in int at /usr/share/perl5/PVE/Storage/LVMPlugin.pm line 133.
Use of uninitialized value $lvcount in int at /usr/share/perl5/PVE/Storage/LVMPlugin.pm line 133.
[...the three warnings above repeat another 17 times while the remaining storages are queried...]
[...the duplicate VG name warnings from the top of the output repeat here...]
Name               Type     Status           Total            Used       Available        %
Backup              nfs     active      1885851904       225841664      1660010240   11.98%
Containers2     lvmthin     active       478486528       126894627       351591900   26.52%
PBS                 pbs     active      1584206496       109502380      1394157148    6.91%
TrueNAS             zfs     active      1349386240       996534288       352851952   73.85%
local               dir     active        69706992        34325020        31795312   49.24%
local1              dir     active       479596204             608       455159936    0.00%
root@proxve1:~#
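Those duplicate-VG warnings (and the uninitialized $size/$free/$lvcount values from LVMPlugin.pm) look like the host's LVM scan is picking up volume groups that live inside guest disks (multiple ubuntu-vg and a second pve VG), which can break the storage status queries. The warning itself already points at the fixes: vgrename by UUID, a device filter, or system IDs. A sketch of how one might investigate and filter, assuming the duplicates really are on guest/iSCSI disks (the device patterns below are placeholders, adjust to your setup):

Code:
# Which physical volumes carry the duplicate VGs?
pvs -o pv_name,vg_name,vg_uuid
# One option: keep LVM from scanning guest/iSCSI disks at all via a
# global_filter in /etc/lvm/lvm.conf, e.g. (placeholder patterns!):
#   global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/iscsi-.*|", "a|.*|" ]
# Then re-check whether the warnings and the GUI status are gone:
pvesm status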
 
I have the same 'communication failure' problem, not yet solved.
Here is the content of storage.cfg:
Code:
cat storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,backup,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

lvm: pve01-1T
        vgname pve01-1T
        content images,rootdir
        nodes pve01
        shared 0

lvm: pve00-1T
        vgname pve00-1T
        content images,rootdir
        nodes pve00
        shared 0

Here is the output of pvesm status:

On node pve00:

Code:
root@pve00:/etc/pve# pvesm status
Name           Type     Status        Total        Used   Available       %
local          dir      active    100614144    19886952    80727192  19.77%
local-lvm      lvmthin  active    335130624           0   335130624   0.00%
pve00-1T       lvm      active    976760832   186654720   790106112  19.11%
pve01-1T       lvm      disabled          0           0           0     N/A


On node pve01:

Code:
root@pve01:~# pvesm status
Name           Type     Status        Total        Used   Available       %
local          dir      active    100614144     3538096    97076048   3.52%
local-lvm      lvmthin  active    335130624           0   335130624   0.00%
pve00-1T       lvm      disabled          0           0           0     N/A
pve01-1T       lvm      active    976760832    33554432   943206400   3.44%

The problem could be the 'disabled' state, maybe due to a configuration problem.
Can someone help me?
Thanks
 
Hi,
Please check the system logs/journal on both nodes. When/where exactly does the error message appear for you? Can you ping between the nodes? What is the status of the cluster, i.e. pvecm status? Did you clear the browser cache/try with a different browser?
 
On node pve01 I ran:

Code:
root@pve01:~# pvesm status
Name           Type     Status        Total        Used   Available       %
local          dir      active    100614144     3538552    97075592   3.52%
local-lvm      lvmthin  active    335130624           0   335130624   0.00%
pve00-1T       lvm      disabled          0           0           0     N/A
pve01-1T       lvm      active    976760832    33554432   943206400   3.44%

================

Here is the output of the commands you suggested:

Code:
root@pve01:~# pvecm status
Cluster information
-------------------
Name: US01
Config Version: 2
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Mon Dec 16 08:02:22 2024
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000002
Ring ID: 1.50
Quorate: Yes

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 1
Flags: 2Node Quorate WaitForAll

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 182.18.0.227
0x00000002 1 182.18.0.228 (local)



On node pve01 I run the ping command to the other node:
Code:
root@pve01:~# ping pve00
PING pve00.uniplan.it (182.18.0.227) 56(84) bytes of data.
64 bytes from pve00.uniplan.it (182.18.0.227): icmp_seq=1 ttl=64 time=0.176 ms
64 bytes from pve00.uniplan.it (182.18.0.227): icmp_seq=2 ttl=64 time=0.191 ms
64 bytes from pve00.uniplan.it (182.18.0.227): icmp_seq=3 ttl=64 time=0.249 ms
64 bytes from pve00.uniplan.it (182.18.0.227): icmp_seq=4 ttl=64 time=0.176 ms
64 bytes from pve00.uniplan.it (182.18.0.227): icmp_seq=5 ttl=64 time=0.179 ms




On node pve00 I ran the same commands:

Code:
root@pve00:~# pvecm status
Cluster information
-------------------
Name: US01
Config Version: 2
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Mon Dec 16 08:10:41 2024
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1.50
Quorate: Yes

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 1
Flags: 2Node Quorate WaitForAll

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 182.18.0.227 (local)
0x00000002 1 182.18.0.228



Code:
root@pve00:~# ping pve01
PING pve01.uniplan.it (182.18.0.228) 56(84) bytes of data.
64 bytes from pve01.uniplan.it (182.18.0.228): icmp_seq=1 ttl=64 time=0.171 ms
64 bytes from pve01.uniplan.it (182.18.0.228): icmp_seq=2 ttl=64 time=0.115 ms
64 bytes from pve01.uniplan.it (182.18.0.228): icmp_seq=3 ttl=64 time=0.186 ms
64 bytes from pve01.uniplan.it (182.18.0.228): icmp_seq=4 ttl=64 time=0.173 ms



I also cleared the browser cache and tried another browser; the result is always the same, I get what you see in the attachment: communication failure.

Any other suggestions?
Thanks
 

[Attachment: ProxMox_communication_failure.jpg, screenshot of the 'communication failure' error]
Any other suggestions?
If you log in to the pve01 node GUI directly (I believe that to be 182.18.0.228:8006), can you access all its local storage etc.? What about pve00's storage from that node?

I notice your version is outdated. How about the other node's version? You may want to update both.