pvestatd missing metrics in InfluxDB

MichaelTrip

Member
Jan 5, 2017
Hi,

I am trying to have pvestatd write its metrics to InfluxDB, configured via /etc/pve/status.cfg.
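For context, a minimal /etc/pve/status.cfg for an InfluxDB target looks roughly like the sketch below (the section name 'localinflux', the address, and the port are placeholders, assuming the PVE 6.x metric-server format and InfluxDB's common UDP listener on 8089):

Code:
influxdb: localinflux
        server 192.0.2.10
        port 8089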

But I am missing certain fields/indexes in InfluxDB's fields.idx file. I only see these:

Code:
system
avail
content
enabled
total
type
used
shared
While a colleague of mine, who has a fresh install of Proxmox, sees these fields in fields.idx:

Code:
blockstat
user_bavail
fper
blocks
used
fused
user_blocks
per
favail
user_favail
ffree
su_blocks
su_files
bfree
su_bavail
su_favail
user_files
user_fused
user_used
bavail
files

cpustat
avg1
iowait
cpu
sum
cpus
idle
user
avg15
system
ctime
nice
used
avg5
wait

memory
swapused
swaptotal
memfree
memshared
memtotal
memused
swapfree

nics
receive
transmit

system
cpu
content
uptime
type
used
active
total
status
cpus
netout
maxswap
vmid
maxdisk
swap
diskread
name
maxmem
enabled
pid
diskwrite
netin
disk
shared
avail
qmpstatus
Is there a way to reset pvestatd so it creates the correct fields again?
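For comparison, the same field index can be queried through the influx CLI instead of reading the binary fields.idx, assuming an InfluxDB 1.x instance with a database named 'proxmox':

Code:
influx -database 'proxmox' -execute 'SHOW FIELD KEYS'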
 

dcsapak

Proxmox Staff Member
Staff member
Feb 1, 2016
Can you post your 'pveversion -v'?
 

MichaelTrip

Member
Jan 5, 2017
Hi,

Here is the output of pveversion -v

Code:
root@pve1:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e)
pve-kernel-5.3: 6.1-5
pve-kernel-helper: 6.1-5
pve-kernel-4.15: 5.4-12
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.18-1-pve: 5.3.18-1
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-9-pve: 4.15.18-30
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.14-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-12
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-4
libpve-storage-perl: 6.1-4
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-19
pve-docs: 6.1-6
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-10
pve-firmware: 3.0-5
pve-ha-manager: 3.0-8
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-3
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-6
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 

dcsapak

Proxmox Staff Member
Staff member
Feb 1, 2016
What is the output of the following?
Code:
pvesh get /cluster/resources
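If the full output is too large to post, it can be narrowed to guests only; assuming the PVE 6.x pvesh options, something like:

Code:
pvesh get /cluster/resources --type vm --output-format json-pretty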
 

dcsapak

Proxmox Staff Member
Staff member
Feb 1, 2016
Also: anything suspicious in the logs (syslog/journal)?
 

MichaelTrip

Member
Jan 5, 2017
Hi,

Here is the output of pvesh, attached.

No strange messages in either journalctl -f or syslog.
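To narrow that check to the daemon itself, the pvestatd unit's journal can also be filtered directly, e.g.:

Code:
journalctl -u pvestatd --since "1 hour ago"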
 

Attachments

  • pvsh.txt
    38.7 KB · Views: 12

dcsapak

Proxmox Staff Member
Staff member
Feb 1, 2016
Do you have any VM or CT with many disks or NICs?
 

dcsapak

Proxmox Staff Member
Staff member
Feb 1, 2016
Just a question: does your pvestatd run?
What does 'systemctl status pvestatd' show?
 

MichaelTrip

Member
Jan 5, 2017
Just a question: does your pvestatd run?
What does 'systemctl status pvestatd' show?

Hi,

pvestatd is running:

Code:
root@pve1:~# systemctl status pvestatd
● pvestatd.service - PVE Status Daemon
Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2020-02-27 13:33:00 CET; 4 days ago
Process: 21876 ExecStart=/usr/bin/pvestatd start (code=exited, status=0/SUCCESS)
Main PID: 21898 (pvestatd)
Tasks: 1 (limit: 9830)
Memory: 66.8M
CGroup: /system.slice/pvestatd.service
└─21898 pvestatd

I will test the patch later today.
 

thheo

Active Member
Nov 17, 2013
Hi,

I have a similar problem: pvestatd only reports the storages to InfluxDB:
Code:
> show series
key
---
system,host=NFS1,nodename=proxmox,object=storages,type=nfs
system,host=local,nodename=proxmox,object=storages,type=dir
system,host=slow6,nodename=proxmox,object=storages,type=dir
system,host=ssd,nodename=proxmox,object=storages,type=dir
system,host=ssd2,nodename=proxmox,object=storages,type=dir

I did a packet capture on the influxdb server and this is the only data flowing in.
Any ideas?
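For anyone wanting to reproduce the capture, something along these lines works (assuming the metrics go out over the default UDP port 8089):

Code:
tcpdump -i any -n -A udp port 8089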
 

mhaluska

Member
Sep 23, 2018
Same problem here:

Code:
> show series
key
---
system,host=local,nodename=pve1,object=storages,type=dir
system,host=pve-home,nodename=pve1,object=storages,type=nfs
system,host=z1pool-kvm,nodename=pve1,object=storages,type=zfspool
system,host=z1pool-lxc,nodename=pve1,object=storages,type=zfspool

Code:
# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.18-3-pve)
pve-manager: 6.1-11 (running version: 6.1-11/f2f18736)
pve-kernel-helper: 6.1-9
pve-kernel-5.3: 6.1-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown2: 2.0.1-1+pve8
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.2
libpve-access-control: 6.0-7
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-1
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-6
pve-cluster: 6.1-8
pve-container: 3.1-4
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.0-7
pve-ha-manager: 3.0-9
pve-i18n: 2.1-1
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-20
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
 

elk

New Member
Jul 23, 2020
Same problem here: Proxmox is not reporting any data for QEMU VMs, only for the host system. Everything was fine from installation until 19 July; the last correct report was sent at 23:59 that day, and then it simply stopped reporting VM data. I have not touched the host in any way, except for adding 2-3 VMs. Max disks per VM: 2, max NICs per VM: 1, total bridges: 9 (one bridge per VM, all QEMU). Nothing special in the logs.

If I stop the receiving influxdb, I get the following in the logs:

Code:
Jul 23 13:26:46 s3 pvestatd[9131]: node status update error: metrics send error 'localinflux': failed to send metrics: Connection refused
Jul 23 13:26:46 s3 pvestatd[9131]: qemu status update error: metrics send error 'localinflux': failed to send metrics: Connection refused

To me this means that it is trying to send both node AND qemu status updates, yet somehow only the node updates arrive at influxdb, not the qemu ones. If I replace influx with netcat listening on the same port, it also sees only node updates and no qemu ones.
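A netcat listener of this sort can stand in for InfluxDB during such a test (assuming UDP on port 8089, with influxd stopped so the port is free):

Code:
nc -klu 8089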


versions:

Code:
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-10 (running version: 6.2-10/a20769ed)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.44-1-pve: 5.4.44-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve2
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-1
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-9
pve-cluster: 6.1-8
pve-container: 3.1-11
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-11
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-10
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
 


adrianf

Member
Jan 11, 2020
I am having exactly the same issue.
Suddenly, I receive only the "system" measurement on the InfluxDB side, and only with storage information for an LXC container, not the VMs.

I tried completely resetting the InfluxDB data (all databases, series, etc.), changing the UDP port, and pointing status.cfg and influxdb.conf to a new database, but nothing helped.
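For anyone trying the same reset, the database side can be done with plain InfluxQL (the database name 'proxmox' is a placeholder):

Code:
influx -execute "DROP DATABASE proxmox"
influx -execute "CREATE DATABASE proxmox"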

Any ideas?
 

JohnTanner

Member
Sep 25, 2019
Did anyone else find a solution?
I have the same issue, but pvestatd gives me the following errors, and no pfSense or anything similar is deployed:

Bash:
pvestatd[30492]: lxc status update error: metrics send error 'proxmox': failed to send metrics: Connection refused
pvestatd[30492]: qemu status update error: metrics send error 'proxmox': failed to send metrics: Connection refused

# and rarely:
pvestatd[30492]: failed to close '/sys/fs/cgroup/cpuset/lxc/132/ns/cpuset.cpus' - Device or resource busy
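'Connection refused' generally means nothing is listening on the configured port, so it is worth checking the InfluxDB 1.x UDP listener config (placeholder values below) and confirming the port is open with 'ss -ulnp':

Code:
# /etc/influxdb/influxdb.conf (InfluxDB 1.x)
[[udp]]
  enabled = true
  bind-address = ":8089"
  database = "proxmox"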
 

zargarzadehm

New Member
Jan 15, 2022
Did anyone else find a solution?
I have the same issue: no QEMU VM data is returned in the metrics, and pvestatd doesn't give me any errors.

Bash:
proxmox-ve: 6.4-1 (running kernel: 5.4.143-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-helper: 6.4-8
pve-kernel-5.4: 6.4-7
pve-kernel-5.4.143-1-pve: 5.4.143-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve1~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
 
