[SOLVED] pvestatd - Use of uninitialized value

bjsko

Hi,

In my pve-no-subscription test cluster I noticed the following repeating entries in /var/log/syslog after upgrading pve-manager from 6.3-3 to 6.3-4:

Code:
Mar 17 14:20:07 pve302 pvestatd[2213]: Use of uninitialized value $free in addition (+) at /usr/share/perl5/PVE/Storage/RBDPlugin.pm line 561.
Mar 17 14:20:07 pve302 pvestatd[2213]: Use of uninitialized value $used in addition (+) at /usr/share/perl5/PVE/Storage/RBDPlugin.pm line 561.
Mar 17 14:20:07 pve302 pvestatd[2213]: Use of uninitialized value $avail in int at /usr/share/perl5/PVE/Storage.pm line 1218.
Mar 17 14:20:07 pve302 pvestatd[2213]: Use of uninitialized value $used in int at /usr/share/perl5/PVE/Storage.pm line 1219.
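
The same warnings can also be followed live from the pvestatd service journal instead of tailing /var/log/syslog; a minimal example, assuming the default systemd journal setup on PVE:

Code:
# Follow the pvestatd daemon log in real time
journalctl -u pvestatd -f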

Code:
pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.101-1-pve)
pve-manager: 6.3-4 (running version: 6.3-4/0a38c56f)
pve-kernel-5.4: 6.3-6
pve-kernel-helper: 6.3-6
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.101-1-pve: 5.4.101-1
pve-kernel-5.4.98-1-pve: 5.4.98-1
pve-kernel-4.15: 5.4-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-10-pve: 4.15.18-32
ceph: 15.2.8-pve2
ceph-fuse: 15.2.8-pve2
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ifupdown2: residual config
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.0.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-5
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.2.0-2
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.3-pve2

Today I did another upgrade, and pve-manager is now at 6.3-6:
Code:
pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.103-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-7
pve-kernel-helper: 6.3-7
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.101-1-pve: 5.4.101-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph: 15.2.9-pve1
ceph-fuse: 15.2.9-pve1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.0.10-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-6
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.2.0-3
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-8
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.3-pve2

The entries in /var/log/syslog still appear very regularly. What are they? Are they anything to worry about? Can I do something to make them disappear? ;)

The PVE cluster is connected to an external Ceph Octopus cluster and has only one CephFS filesystem mounted on the hypervisors:
Code:
df -h
Filesystem                                                                                                           Size  Used Avail Use% Mounted on
udev                                                                                                                 378G     0  378G   0% /dev
tmpfs                                                                                                                 76G   11M   76G   1% /run
/dev/mapper/pve-root                                                                                                  68G  3.4G   62G   6% /
tmpfs                                                                                                                378G   43M  378G   1% /dev/shm
tmpfs                                                                                                                5.0M     0  5.0M   0% /run/lock
tmpfs                                                                                                                378G     0  378G   0% /sys/fs/cgroup
/dev/fuse                                                                                                             30M   20K   30M   1% /etc/pve
xx.xxx.xx.xx:3300,xx.xxx.xx.xx:6789,xx.xxx.xx.xx:3300,xx.xxx.xx.xx:6789,xx.xxx.xx.xx:3300,xx.xxx.xx.xxx:6789:/   26T  480G   25T   2% /mnt/pve/cephfs
tmpfs                                                                                                                 76G     0   76G   0% /run/user/0

Many thanks
Bjørn
 
Hi, please find ceph.conf below. It has entries for the mon hosts. Bear in mind that ceph.conf has not been changed at all; the messages in /var/log/syslog only appeared after I patched the PVE cluster to 6.3-4.


Code:
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         bluestore_block_db_size = 53687091200
         cluster_network = xx.xx.xx.xx/24
         fsid = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
         mon_allow_pool_delete = true
         mon_host = [v2:xx.xx.xx.xx:3300/0,v1:xx.xx.xx.xx:6789/0] [v2:xx.xx.xx.xx:3300/0,v1:xx.xx.xx.xx:6789/0] [v2:xx.xx.xx.xx:3300/0,v1:xx.xx.xx.xx:6789/0]
         mon_initial_members = server1, server2, server3
         public_network = xx.xx.xx.xx/24

[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
         keyring = /var/lib/ceph/mds/ceph-$id/keyring

[osd]
         debug_filer = "0/0"
         debug_filestore = "0/0"
         ms_dispatch_throttle_bytes = 1048576000
         objecter_inflight_op_bytes = 1048576000
         objecter_inflight_ops = 10240
         osd_disk_threads = 4
         osd_op_queue_cut_off = high
         osd_op_threads = 8
         osd_pg_object_context_cache_count = 1024

[mds.pve302]
         host = pve302
         mds_standby_for_name = pve

[mds.pve303]
         host = pve303
         mds standby for name = pve

[mds.pve301]
         host = pve301
         mds_standby_for_name = pve
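
For reference, connectivity to the external cluster can be sanity-checked from each hypervisor with the standard Ceph CLI; a minimal sketch, assuming the client keyring referenced above is valid on the nodes:

Code:
# Both commands should answer promptly on every PVE node if the mons listed
# in mon_host are reachable and the client keyring works
ceph -s
ceph df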

Many thanks
Bjørn
 
I haven't been able to figure this out. As it is a test system, I have kept patching it with everything released to the no-subscription repository. The current versions are:
Code:
# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph: 15.2.10-pve1
ceph-fuse: 15.2.10-pve1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.0.13-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-9
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-5
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

In addition to showing up in /var/log/syslog, the same messages are also printed by pvesm:
Code:
# pvesm status
Use of uninitialized value $free in addition (+) at /usr/share/perl5/PVE/Storage/RBDPlugin.pm line 561.
Use of uninitialized value $used in addition (+) at /usr/share/perl5/PVE/Storage/RBDPlugin.pm line 561.
Use of uninitialized value $avail in int at /usr/share/perl5/PVE/Storage.pm line 1218.
Use of uninitialized value $used in int at /usr/share/perl5/PVE/Storage.pm line 1219.
Name             Type     Status           Total            Used       Available        %
bco               rbd     active     27472020943       923413967     26548606976    3.36%
c3                rbd     active     26604668351        56061375     26548606976    0.21%
cephfs         cephfs     active     27051479040       502874112     26548604928    1.86%
ev                rbd     active               0               0               0    0.00%
ev2               rbd     active     39950670433       127760993     39822909440    0.32%
evtest            rbd     active     26548606976               0     26548606976    0.00%
local             dir     active        71208088         3703976        63843928    5.20%
local-lvm     lvmthin     active       189976576               0       189976576    0.00%
zal               rbd     active     26548607837             861     26548606976    0.00%

I have seen a couple of other threads with similar error messages (https://forum.proxmox.com/threads/u...connecting-fuse-mounted-gdrive-storage.52259/ and https://forum.proxmox.com/threads/bug-in-pve-tools-df-when-adding-petabyte-scale-storage.60090/). But I don't think they are related, as the issues mentioned in those threads have been fixed (and in my case the storage is terabyte-scale, not petabyte-scale).

I am not sure whether the messages have any actual impact other than filling the logs (the four lines appear every 10 seconds), but I am a bit reluctant to patch my production environment until I know why they suddenly started appearing.
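
To gauge how quickly the warnings accumulate, a simple count over the current syslog works; a minimal sketch, with the match string taken from the messages above:

Code:
# Count the warnings in the current (unrotated) syslog
grep -c 'Use of uninitialized value' /var/log/syslog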

Any help much appreciated!

BR
Bjørn
 
The pool referenced by the 'ev' storage definition likely does not exist. It is a bug in the handling of that situation that causes the messages; the storage should be treated as inactive.
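
For anyone hitting the same warnings: the mismatch is easy to spot by comparing the pool names referenced in the storage configuration with the pools the cluster actually reports. A minimal sketch, assuming the status figures are derived from the cluster's df output:

Code:
# Storage definitions on the PVE side; note the "pool" property of each rbd storage
cat /etc/pve/storage.cfg

# Pools that actually exist on the external Ceph cluster, together with the
# usage figures used for the storage status; a pool missing here has nothing
# to report
ceph df
ceph osd pool ls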
 
Thank you so much! I didn't spot that one! I verified it by disabling the 'ev' storage, and of course the messages disappeared. I have obviously been chasing in the wrong direction here. Thanks also for creating the bug report.
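
For reference, a storage definition can also be deactivated from the CLI; a hedged example using the standard pvesm storage option (the storage name 'ev' is the one from this thread):

Code:
# Disable the storage so pvestatd stops querying it
pvesm set ev --disable 1

# Re-enable it once the underlying pool exists again
pvesm set ev --disable 0
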
I will mark this thread as solved, as further tracking is done in the bug report.

BR
Bjørn