VM disks are not accessible through the web GUI, pvesm, or backups after upgrade to PVE 7

zolten

Hi everyone,

I have upgraded one of our servers to PVE 7.
We are using GlusterFS storage (on ZFS) for our VMs. The GlusterFS volumes seem to be fine after the upgrade: all gluster peers and volumes are online and there are no healing errors.
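For reference, those health checks were done with the usual gluster CLI, roughly like this (volume name as in our storage.cfg):
Code:
# peer connectivity and brick/volume status
gluster peer status
gluster volume status GlusterVol_VMStorage
# pending heals on the volume
gluster volume heal GlusterVol_VMStorage info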
However, after the upgrade to PVE 7 I noticed that the automatic backup job failed with the following errors:
Code:
Use of uninitialized value $used in pattern match (m//) at /usr/share/perl5/PVE/Storage/Plugin.pm line 844.
Use of uninitialized value $used in concatenation (.) or string at /usr/share/perl5/PVE/Storage/Plugin.pm line 844.
ERROR: Backup of VM 502 failed - no such volume 'GlusterVol_VMStorage:502/vm-502-disk-1.raw'

When I navigate to this storage in the web GUI on this node and select “VM Disks”, no disks are shown; only the message “used '' not an integer (500)” appears.
ISO images on the same GlusterFS volume are shown in the web GUI without errors.
Also, while connected to the web GUI on the same node, if I navigate to the same GlusterFS volume on other nodes (which are still on PVE 6), all disks are shown.

pvesm list GlusterVol_VMStorage returns the following errors:
Code:
Use of uninitialized value $used in pattern match (m//) at /usr/share/perl5/PVE/Storage/Plugin.pm line 844.
Use of uninitialized value $used in concatenation (.) or string at /usr/share/perl5/PVE/Storage/Plugin.pm line 844.
used '' not an integer

However, I can see all the disk files from the command line.
Also, I can start and stop this VM, and it can be migrated to another node and back to this one.
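Here is roughly how I am checking the files from the shell (paths are on the mounted gluster volume from storage.cfg):
Code:
# list the VM 502 images on the mounted gluster volume
ls -lh /mnt/pve/GlusterVol_VMStorage/images/502/
# size/allocation of the disk the backup complains about
stat /mnt/pve/GlusterVol_VMStorage/images/502/vm-502-disk-1.raw
qemu-img info /mnt/pve/GlusterVol_VMStorage/images/502/vm-502-disk-1.raw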

Any thoughts?

pveversion -v:
Code:
pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.4.106-1-pve)
pve-manager: 7.0-10 (running version: 7.0-10/d2f465d3)
pve-kernel-5.11: 7.0-5
pve-kernel-helper: 7.0-5
pve-kernel-5.4: 6.4-5
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.11.22-2-pve: 5.11.22-4
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-4.15: 5.4-8
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.18-1-pve: 5.3.18-1
pve-kernel-5.3.13-2-pve: 5.3.13-2
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-4-pve: 5.0.21-9
pve-kernel-5.0.21-2-pve: 5.0.21-7
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.21-1-pve: 5.0.21-2
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-10-pve: 4.15.18-32
ceph-fuse: 14.2.21-1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.3-1
ifupdown: 0.8.36
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.2.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-5
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-9
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.7-1
proxmox-backup-file-restore: 2.0.7-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-5
pve-cluster: 7.0-3
pve-container: 4.0-8
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-10
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1

Configuration of the GlusterFS volume in /etc/pve/storage.cfg:
Code:
glusterfs: GlusterVol_VMStorage
        path /mnt/pve/GlusterVol_VMStorage
        volume GlusterVol_VMStorage
        content vztmpl,iso,images
        server 10.10.10.1
        server2 10.10.10.2
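
For completeness, the mount and the storage status on the upgraded node can be checked with:
Code:
# confirm the gluster FUSE mount and its reported free/used space
mount | grep GlusterVol_VMStorage
df -h /mnt/pve/GlusterVol_VMStorage
# storage status as PVE sees it
pvesm status --storage GlusterVol_VMStorage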

Configuration of the VM:
Code:
agent: 1,fstrim_cloned_disks=1
bootdisk: scsi0
cores: 20
cpu: IvyBridge
ide2: none,media=cdrom
memory: 81920
name: Bird
net0: virtio=3A:0C:BC:8F:7F:BA,bridge=vmbr1
net1: virtio=EE:5E:8E:DD:C4:58,bridge=vmbr6
net2: virtio=56:FC:BD:68:EC:C1,bridge=vmbr7
numa: 0
ostype: l26
scsi0: GlusterVol_VMStorage:502/vm-502-disk-0.raw,aio=threads,size=32G
scsi1: GlusterVol_VMStorage:502/vm-502-disk-1.raw,aio=threads,size=500G
scsihw: virtio-scsi-pci
smbios1: uuid=74e9a4dd-228e-479b-87a8-8379d885f56f
sockets: 2
startup: order=4,up=120
tablet: 0
vmgenid: f3f0635c-dcc4-45dd-ad09-2758336bada6
 

Attachment: GUI_error.png (screenshot of the error shown in the web GUI)
A little update:
Tried cloning virtual machines to that gluster storage: everything OK.
Tried restoring a VM from a backup (backups are stored on a different volume) to the problematic gluster volume: everything OK (rough CLI equivalents below).
I also tried shutting down the other main gluster node (we use a replicated config with 2 nodes + an arbiter) to make sure gluster is really working correctly on the upgraded server. The VMs kept running, and gluster healed once the second server came back up.
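The clone and restore tests above correspond roughly to the following CLI commands (VM IDs and the backup archive name are placeholders):
Code:
# full clone of an existing VM onto the gluster storage
qm clone <source-vmid> <new-vmid> --full 1 --storage GlusterVol_VMStorage
# restore a backup archive onto the gluster storage
qmrestore <backup-archive> <new-vmid> --storage GlusterVol_VMStorage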

So it seems those VM disks are inaccessible only through the web GUI and pvesm, and backup tasks can't read them.
But somehow backups can be restored to that volume.
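If it helps, I can rerun the backup for this VM manually to capture the full task log, e.g. (the backup storage ID below is a placeholder):
Code:
# manual backup run for the failing VM
vzdump 502 --storage <backup-storage-id> --mode snapshot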

I would appreciate any help with this issue, as I'm getting anxious about the non-working backup jobs.
 
