Uninitialized value errors in syslog after connecting fuse mounted GDrive storage

WvdW
Apr 18, 2013
Hi,

After successfully configuring a fuse-mounted rclone GDrive storage on Proxmox VE 5.3-11, I started getting hundreds of uninitialized value errors in syslog for PVE storage.

pvestatd[1675]: Use of uninitialized value $avail in int at /usr/share/perl5/PVE/Storage.pm line 1158.

Line 1158 deals with establishing the available storage size for a volume, but I think it doesn't know how to handle the values fusemount returns to Proxmox for a GDrive with unlimited storage. The summary for the GDrive storage shows the values incorrectly:
Usage 11734.99% (1.00 PiB of 8.73 TiB)
So it's basically flipping the values around and showing the storage as completely oversubscribed.
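The percentage is consistent with the two size values simply being swapped. A quick sanity check (a sketch in Python; the KiB values are taken from the pvesm output further down in this thread) reproduces the GUI's number by dividing the 1 PiB figure by the 8.73 TiB figure:

```python
# Hypothetical reconstruction of the percentage shown in the GUI, assuming it
# divides "used" by "total" as reported by pvestatd (values in KiB, copied
# from the pvesm status output in this thread).
total_kib = 9_369_515_196        # ~8.73 TiB -- actually the *used* space on GDrive
used_kib = 1_099_511_627_776     # exactly 1 PiB -- rclone's fake *total* size
pct = used_kib / total_kib * 100
print(f"{pct:.2f}%")  # → 11734.99%
```

That the swapped division lands exactly on the displayed 11734.99% supports the theory that total and used are being mixed up somewhere.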

Surely this is a bug, and the summary should reflect the correct values?

My primary question, though, is: how do I stop the errors in syslog? It logs up to 5 entries per second.

Thanks.

Werner
 
i assume you configured it as directory storage

what does 'df <path-to-mountpoint>' say?
 

Dominik,
df -h <path-to-mountpoint>:
Code:
Size  Used  Avail
1.0P  8.8T  1.0P

So it shows it correctly there (or as close to correct as it can be with unlimited storage).
 
ok, could you try the following:

Code:
perl -w -e 'use Filesys::Df; use Data::Dumper; print Dumper Filesys::Df::df("PATH",1);'

where you replace PATH with the path
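For comparison, a rough Python analogue of that one-liner (a sketch only, not a replacement for the requested Perl command): `os.statvfs()` exposes the same raw block counts that Filesys::Df reports.

```python
import os

# Query filesystem stats for a path and print block-level totals in bytes.
# Replace "/" with the fuse mount point when reproducing on the affected host.
st = os.statvfs("/")
print("blocks (total bytes):", st.f_blocks * st.f_frsize)
print("bavail (avail bytes):", st.f_bavail * st.f_frsize)
print("bfree  (free  bytes):", st.f_bfree * st.f_frsize)
```

On an "unlimited" fuse mount these numbers are whatever the fuse daemon fakes, which is where oddities like a 1 PiB total come from.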
 

Code:
$VAR1 = {
          'su_bavail' => '1116305286746112',
          'bfree' => '1116305286746112',
          'blocks' => '1125899906842624',
          'user_bavail' => '1125899906842624',
          'user_blocks' => '1125899906842624',
          'su_blocks' => '1125899906842624',
          'user_files' => 1000000000,
          'files' => 1000000000,
          'user_used' => '9594620096512',
          'fused' => 0,
          'su_files' => 1000000000,
          'bavail' => '1125899906842624',
          'fper' => 0,
          'user_fused' => 0,
          'favail' => 1000000000,
          'ffree' => 1000000000,
          'used' => '9594620096512',
          'su_favail' => 1000000000,
          'user_favail' => 1000000000,
          'per' => 1
        };
 
ok, sorry for the back and forth, but i cannot reproduce this here quickly

can you try:

Code:
pvesm status
perl -e 'use PVE::Tools; use Data::Dumper; print Dumper PVE::Tools::df("PATH");
 

No problem, I don't mind assisting :)

pvesm status:
Code:
Name       Type     Status  Total       Used           Available  %
Store1     dir      active  9369515196  1099511627776  0          11734.99%
Store2     dir      active  9369746188  1099511627776  0          11734.70%
local      dir      active  20511312    2065416        17380936   10.07%
local-lvm  lvmthin  active  951939072   11423268       940515803  1.20%

perl -e 'use PVE::Tools; use Data::Dumper; print Dumper PVE::Tools::df("PATH"); - with PATH replaced with the mount point
Doesn't return any results; it just drops me to the shell's continuation prompt (>)
 
sorry i missed a ' at the end
 

:) Don't worry, I also missed it. I was doing a quick check on why it was failing and didn't even think about the closing '.
anyway here is the result:
Code:
$VAR1 = {
          'avail' => undef,
          'total' => '9594620096512',
          'used' => '1125899906842624'
        };
 
can you post your pveversion -v ?
 

proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve)
pve-manager: 5.3-11 (running version: 5.3-11/d4907f84)
pve-kernel-4.15: 5.3-2
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-10-pve: 4.15.18-32
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-47
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-12
libpve-storage-perl: 5.0-38
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-23
pve-cluster: 5.0-33
pve-container: 2.0-34
pve-docs: 5.3-3
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-18
pve-firmware: 2.0-6
pve-ha-manager: 2.0-6
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 2.12.1-2
pve-xtermjs: 3.10.1-2
qemu-server: 5.0-47
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.12-pve1~bpo1
 
ok, i'm baffled...

this
Code:
$VAR1 = {
          'avail' => undef,
          'total' => '9594620096512',
          'used' => '1125899906842624'
        };
makes no sense,

because all we do in PVE::Tools::df is call Filesys::Df and take 'blocks' (total), 'used' (used), and 'bavail' (avail) from its output, and you wrote that you get:

Code:
$VAR1 = {
          'su_bavail' => '1116305286746112',
          'bfree' => '1116305286746112',
          'blocks' => '1125899906842624',
          'user_bavail' => '1125899906842624',
          'user_blocks' => '1125899906842624',
          'su_blocks' => '1125899906842624',
          'user_files' => 1000000000,
          'files' => 1000000000,
          'user_used' => '9594620096512',
          'fused' => 0,
          'su_files' => 1000000000,
          'bavail' => '1125899906842624',
          'fper' => 0,
          'user_fused' => 0,
          'favail' => 1000000000,
          'ffree' => 1000000000,
          'used' => '9594620096512',
          'su_favail' => 1000000000,
          'user_favail' => 1000000000,
          'per' => 1
        };

which would mean you should get:

Code:
{
    avail => '1125899906842624',
    total => '1125899906842624',
    used => '9594620096512',
}
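The mapping Dominik describes can be sketched like this (an illustration in Python only; the real code is Perl in PVE::Tools, and the field names are taken from the Filesys::Df output above):

```python
# Sketch of the described mapping: PVE::Tools::df reportedly takes the
# Filesys::Df result and returns 'blocks' as total, 'used' as used, and
# 'bavail' as avail.
def pve_df(df_result: dict) -> dict:
    return {
        "total": df_result.get("blocks"),
        "used": df_result.get("used"),
        "avail": df_result.get("bavail"),
    }

# The relevant fields from the Filesys::Df dump posted above.
filesys_df = {
    "blocks": "1125899906842624",
    "used": "9594620096512",
    "bavail": "1125899906842624",
}
print(pve_df(filesys_df))
# → {'total': '1125899906842624', 'used': '9594620096512', 'avail': '1125899906842624'}
```

With that mapping, 'avail' could only come back undef if the underlying field lookup somehow missed 'bavail', which is exactly what makes the posted result so puzzling.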
 

I agree with you, but it is what it is. Somewhere in your code the value fetched for 'available' is wrong or mismatched. All the commands were executed exactly as you gave them, and the results are as posted.
 
I have exactly the same error message in the logs, but I am using a BeeGFS file system as shared storage (mounted directory). Using the command "qm move_disk", it seems to copy the qcow2 or raw file from NFS to BeeGFS, but it does not actually copy anything over, and qm does not show any error message other than complaining that no QEMU guest agent is installed in the VM.


Code:
root@proxa5:~# df -h /mnt/beegfs
Filesystem    Size  Used  Avail  Use%  Mounted on
beegfs_nodev  4.7P  2.8P  1.9P   60%   /mnt/beegfs


Code:
root@proxa5:~# perl -e 'use PVE::Tools; use Data::Dumper; print Dumper PVE::Tools::df("/mnt/beegfs");'
$VAR1 = {
          'used' => undef,
          'avail' => undef,
          'total' => '2109459275972608'
        };
root@proxa5:~# pveversion -v
proxmox-ve: 5.4-2 (running kernel: 4.15.18-21-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-9
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-12-pve: 4.15.18-36
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.15.17-3-pve: 4.15.17-14
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-55
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-7
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-34
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-54
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
 
and the first command was:

Code:
root@proxa5:~# perl -w -e 'use Filesys::Df; use Data::Dumper; print Dumper Filesys::Df::df("/mnt/beegfs",1);'
$VAR1 = {
          'used' => '3092919999266816',
          'bfree' => '2109416822800384',
          'bavail' => '2109416822800384',
          'user_blocks' => '5202336822067200',
          'user_bavail' => '2109416822800384',
          'user_used' => '3092919999266816',
          'per' => 59,
          'blocks' => '5202336822067200',
          'su_blocks' => '5202336822067200',
          'su_bavail' => '2109416822800384'
        };
 
