Hi,
In my pve-no-subscription test cluster I noticed the following repeating entries in /var/log/syslog after upgrading pve-manager from 6.3-3 to 6.3-4:
Code:
Mar 17 14:20:07 pve302 pvestatd[2213]: Use of uninitialized value $free in addition (+) at /usr/share/perl5/PVE/Storage/RBDPlugin.pm line 561.
Mar 17 14:20:07 pve302 pvestatd[2213]: Use of uninitialized value $used in addition (+) at /usr/share/perl5/PVE/Storage/RBDPlugin.pm line 561.
Mar 17 14:20:07 pve302 pvestatd[2213]: Use of uninitialized value $avail in int at /usr/share/perl5/PVE/Storage.pm line 1218.
Mar 17 14:20:07 pve302 pvestatd[2213]: Use of uninitialized value $used in int at /usr/share/perl5/PVE/Storage.pm line 1219.
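For what it's worth, these look like ordinary Perl "uninitialized value" warnings, i.e. one of the numbers the storage plugin is adding up is undef at that point. A minimal sketch of my own (just an illustration assuming a stats field is missing, not the actual RBDPlugin.pm code) reproduces the same kind of message:
Code:
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical per-pool stats hash where the 'free' field is missing,
# so $stats->{free} evaluates to undef.
my $stats = { used => 1024 };

my $free = $stats->{free};
my $used = $stats->{used};

# The next line triggers exactly this kind of warning:
# "Use of uninitialized value $free in addition (+) at ... line ..."
my $total = $free + $used;

print "total: $total\n";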
Code:
pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.101-1-pve)
pve-manager: 6.3-4 (running version: 6.3-4/0a38c56f)
pve-kernel-5.4: 6.3-6
pve-kernel-helper: 6.3-6
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.101-1-pve: 5.4.101-1
pve-kernel-5.4.98-1-pve: 5.4.98-1
pve-kernel-4.15: 5.4-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-10-pve: 4.15.18-32
ceph: 15.2.8-pve2
ceph-fuse: 15.2.8-pve2
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ifupdown2: residual config
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.0.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-5
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.2.0-2
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.3-pve2
Today I did another upgrade, and pve-manager is now 6.3-6:
Code:
pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.103-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-7
pve-kernel-helper: 6.3-7
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.101-1-pve: 5.4.101-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph: 15.2.9-pve1
ceph-fuse: 15.2.9-pve1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.0.10-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-6
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.2.0-3
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-8
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.3-pve2
The entries are still appearing in /var/log/syslog very regularly. What are they? Are they anything to worry about? Can I do something to make them disappear?

The PVE cluster is connected to an external Ceph Octopus cluster and has only one CephFS filesystem mounted on the hypervisors:
Code:
df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  378G     0  378G   0% /dev
tmpfs                  76G   11M   76G   1% /run
/dev/mapper/pve-root   68G  3.4G   62G   6% /
tmpfs                 378G   43M  378G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 378G     0  378G   0% /sys/fs/cgroup
/dev/fuse              30M   20K   30M   1% /etc/pve
xx.xxx.xx.xx:3300,xx.xxx.xx.xx:6789,xx.xxx.xx.xx:3300,xx.xxx.xx.xx:6789,xx.xxx.xx.xx:3300,xx.xxx.xx.xxx:6789:/   26T  480G   25T   2% /mnt/pve/cephfs
tmpfs                  76G     0   76G   0% /run/user/0
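I guess in plain Perl such warnings are normally silenced with a defined-or fallback, roughly like the sketch below (again only an illustration, not the actual plugin code), but I'd rather understand why the values come back undefined for this external pool in the first place.
Code:
use strict;
use warnings;

# Hypothetical stats hash where 'free' may be absent for an external pool.
my $stats = { used => 1024 };

# '// 0' falls back to 0 when the value is undef, so no warning is emitted.
my $free = $stats->{free} // 0;
my $used = $stats->{used} // 0;

my $total = $free + $used;    # no "uninitialized value" warning here
print "total: $total\n";      # prints "total: 1024"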
Many thanks
Bjørn