[glusterfs][storage] Several storage operations error with regex check

dagservice
Sep 27, 2023
For a couple of weeks now, I have been getting errors from several operations that involve adding, removing or changing storage, such as deleting VMs, qm rescan and so on.

When this happens, the only related output in the log is this:
Code:
Use of uninitialized value $used in pattern match (m//) at /usr/share/perl5/PVE/Storage/Plugin.pm line 964.
Use of uninitialized value $used in concatenation (.) or string at /usr/share/perl5/PVE/Storage/Plugin.pm line 964.
TASK ERROR: used '' not an integer

The storage I use the most is glusterfs, although some VMs use local LVM or local ZFS (depending on which node they are running on, of course).

I've been looking at the source code, but without going through tons of code it's not apparent to me what that regex check is meant to do. Anyone got a clue?
 
if you find out which volume causes the issue, could you run "qemu-img info --output=json $FILE" on it and post the result here?
 
I ran qm rescan --vmid on each VM until I hit one that errored out, so I hope this is one of the right ones.
Here is the output for the (one and only) disk attached to it:
Code:
{
    "children": [
        {
            "name": "file",
            "info": {
                "children": [
                ],
                "virtual-size": 34359738368,
                "filename": "/mnt/pve/NewSystemdisks/images/108/vm-108-disk-0.raw",
                "format": "file",
                "format-specific": {
                    "type": "file",
                    "data": {
                    }
                },
                "dirty-flag": false
            }
        }
    ],
    "virtual-size": 34359738368,
    "filename": "/mnt/pve/NewSystemdisks/images/108/vm-108-disk-0.raw",
    "format": "raw",
    "dirty-flag": false
}
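
What stands out to me is that this JSON has no "actual-size" field, which qemu-img normally includes for a local raw file (the field is optional and, as far as I can tell, gets dropped when QEMU cannot determine a sane allocated size). Here is a minimal Perl sketch, purely hypothetical and not the actual PVE::Storage::Plugin code, of how an integer check on that missing value would produce exactly the three lines from my first post:
Code:
#!/usr/bin/perl
# Hypothetical sketch, NOT the real Plugin.pm code: shows how an undefined
# "actual-size" value reproduces the two warnings and the task error above.
use strict;
use warnings;
use JSON::PP;    # core module

# pipe `qemu-img info --output=json $FILE` into this script
my $json = do { local $/; <STDIN> };
my $info = decode_json($json);
my $used = $info->{'actual-size'};   # missing in the output above -> undef

# undef in a pattern match and in string interpolation triggers the two
# "Use of uninitialized value $used ..." warnings (both pointing at the same
# line, like line 964 in the report), and the die() prints
# "used '' not an integer".
die "used '$used' not an integer\n" if $used !~ m/^(\d+)$/;
print "used: $used bytes\n";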
 
yes, that looks promising.

could you also post "pveversion -v" and "stat /mnt/pve/NewSystemdisks/images/108/vm-108-disk-0.raw" output? thanks!
 
Here goes:
Code:
root@proxmox1:~# pveversion -v
proxmox-ve: 8.0.2 (running kernel: 5.11.22-7-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
pve-kernel-5.15: 7.4-4
proxmox-kernel-6.2.16-12-pve: 6.2.16-12
proxmox-kernel-6.2: 6.2.16-12
proxmox-kernel-6.2.16-10-pve: 6.2.16-10
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-4.4.35-1-pve: 4.4.35-76
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.5
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.8
libpve-guest-common-perl: 5.0.4
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.2-1
proxmox-backup-file-restore: 3.0.2-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.3
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.8-2
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-5
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.7
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1

Code:
root@proxmox1:~# stat /mnt/pve/NewSystemdisks/images/108/vm-108-disk-0.raw
  File: /mnt/pve/NewSystemdisks/images/108/vm-108-disk-0.raw
  Size: 34359738368     Blocks: 18446744073709549975 IO Block: 131072 regular file
Device: 0,49    Inode: 11783863089747202816  Links: 1
Access: (0600/-rw-------)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2023-09-07 16:19:18.924167436 +0200
Modify: 2023-09-28 09:17:13.636408684 +0200
Change: 2023-09-28 09:17:13.636408684 +0200
 Birth: -

Hope that helps
 
is your glusterfs backed by ZFS by chance? the other thread has that as well, and there is an upstream report about it:

https://github.com/gluster/glusterfs/issues/2493

my guess is that gluster internally has some sort of over/underflow issue when the block size is bigger than it expects (512/4k), which makes it return bogus "blocks" numbers.
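
for reference: the "Blocks" value in the stat output above wraps to a negative number if you read it as a signed 64-bit integer (st_blocks is signed on Linux), which fits that picture. a quick back-of-the-envelope check, just arithmetic on the numbers already posted, nothing Proxmox-specific:
Code:
#!/usr/bin/perl
# interpret the st_blocks value from the stat output above as signed 64-bit
use strict;
use warnings;
use bigint;    # exact integer arithmetic beyond 64 bits

my $blocks = 18446744073709549975;   # "Blocks:" from the stat output
my $signed = $blocks - 2**64;        # two's-complement view: -1641
print "signed st_blocks: $signed\n";
print "allocated bytes:  ", $signed * 512, "\n";   # -840192, i.e. negative

# a negative allocated size could explain why qemu-img leaves out the
# optional "actual-size" field in the JSON output earlier in the thread.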
 
Yes, it's indeed glusterfs backed by ZFS. I've left ashift at the default value though, since it's all backed by 512-byte-sector spinning rust. Recordsize is set to 128K. But as I understand it, block sizes in ZFS are variable. I'll leave a note in the upstream report as well.
 
yeah, it would likely go away if you set recordsize to 4k, at the cost of less efficiency on the ZFS side. that would also only affect newly written files, not existing ones. if you have a test setup where you can try this variant, confirming it there might still be helpful for upstream.
 
