[SOLVED] Proxmox 9 no data in the overview

ToniP.

New Member
Jun 2, 2025
I have done a fresh install of Proxmox 9 on my Intel NUC NUC10i7FNB.
Current version: 9.0.5

I cannot see any values in the host overview:
CPU usage, server load, memory usage, network traffic, CPU pressure stall, IO pressure stall, memory pressure stall.
The timeline starts at 1970-01-01.

Values are shown for the individual VMs, though.
 
root@pve9-nuc:~# pvestatd status
running
root@pve9-nuc:~# journalctl -u pvestatd -b
Aug 13 13:25:47 pve9-nuc systemd[1]: Starting pvestatd.service - PVE Status Daemon...
Aug 13 13:25:48 pve9-nuc pvestatd[1141]: starting server
Aug 13 13:25:48 pve9-nuc systemd[1]: Started pvestatd.service - PVE Status Daemon.
Aug 13 14:49:33 pve9-nuc pvestatd[1141]: storage 'USB' is not online
Aug 13 14:49:38 pve9-nuc pvestatd[1141]: storage 'HDD500' is not online
Aug 13 14:49:43 pve9-nuc pvestatd[1141]: storage 'OMV' is not online
Aug 13 14:49:43 pve9-nuc pvestatd[1141]: status update time (15.259 seconds)
Aug 13 14:49:47 pve9-nuc pvestatd[1141]: storage 'OMV' is not online
Aug 13 14:49:50 pve9-nuc pvestatd[1141]: storage 'HDD500' is not online
Aug 13 14:49:53 pve9-nuc pvestatd[1141]: storage 'USB' is not online
Aug 13 14:49:53 pve9-nuc pvestatd[1141]: status update time (10.541 seconds)
Aug 13 14:49:56 pve9-nuc pvestatd[1141]: storage 'OMV' is not online
Aug 13 14:49:56 pve9-nuc pvestatd[1141]: storage 'HDD500' is not online
Aug 13 14:49:56 pve9-nuc pvestatd[1141]: storage 'USB' is not online
Aug 13 14:50:04 pve9-nuc pvestatd[1141]: storage 'OMV' is not online
Aug 13 14:50:04 pve9-nuc pvestatd[1141]: storage 'HDD500' is not online
Aug 13 14:50:04 pve9-nuc pvestatd[1141]: storage 'USB' is not online
Aug 14 07:27:13 pve9-nuc systemd[1]: Reloading pvestatd.service - PVE Status Daemon...
Aug 14 07:27:14 pve9-nuc pvestatd[236573]: send HUP to 1141
Aug 14 07:27:14 pve9-nuc pvestatd[1141]: received signal HUP
Aug 14 07:27:14 pve9-nuc pvestatd[1141]: server shutdown (restart)
Aug 14 07:27:14 pve9-nuc systemd[1]: Reloaded pvestatd.service - PVE Status Daemon.
Aug 14 07:27:14 pve9-nuc pvestatd[1141]: restarting server
Aug 14 07:52:45 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 14 14:58:25 pve9-nuc pvestatd[1141]: VM 107 qmp command failed - VM 107 not running
Aug 15 07:52:45 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 15 08:31:45 pve9-nuc systemd[1]: Reloading pvestatd.service - PVE Status Daemon...
Aug 15 08:31:45 pve9-nuc pvestatd[561586]: send HUP to 1141
Aug 15 08:31:45 pve9-nuc pvestatd[1141]: received signal HUP
Aug 15 08:31:45 pve9-nuc pvestatd[1141]: server shutdown (restart)
Aug 15 08:31:45 pve9-nuc systemd[1]: Reloaded pvestatd.service - PVE Status Daemon.
Aug 15 08:31:46 pve9-nuc pvestatd[1141]: restarting server
Aug 16 07:52:46 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 17 07:52:48 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 18 07:52:51 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 19 07:52:51 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 20 07:52:52 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 21 07:52:57 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 21 11:49:17 pve9-nuc pvestatd[1141]: modified cpu set for lxc/108: 0-1
Aug 21 11:50:26 pve9-nuc pvestatd[1141]: unable to get PID for CT 108 (not running?)
Aug 22 07:53:01 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 22 15:25:23 pve9-nuc pvestatd[1141]: got timeout
Aug 22 15:25:24 pve9-nuc pvestatd[1141]: unable to activate storage 'OMV' - directory '/mnt/pve/OMV' does not exist or is unreachable
Aug 23 07:53:01 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 24 07:53:05 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 25 07:53:07 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
root@pve9-nuc:~# pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-2-pve)
pve-manager: 9.0.5 (running version: 9.0.5/9c5600b249dbfd2f)
proxmox-kernel-helper: 9.0.3
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
proxmox-kernel-6.14: 6.14.8-2
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx9
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.1
proxmox-kernel-helper: 9.0.3
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.0
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-3
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.18
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1
 
The log from pvestatd looks okay.
Could you please double check:
- The time on the PVE host:
Bash:
~# timedatectl status
Make sure the time is correct, NTP is active, and the RTC is in sync.

- Ensure the hostname resolves correctly (a quick verification is sketched after this list):
Bash:
cat /etc/hosts
- You can also safely restart the related services:
Bash:
systemctl restart rrdcached # Collects & stores performance metrics (CPU, memory, etc.)
systemctl restart pvestatd # Updates VM and node statuses; the UI may briefly show stale info during the restart
systemctl restart pveproxy # Briefly restarts the web UI, may require browser reload
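
For the hostname check, a minimal sketch that goes beyond reading /etc/hosts (assuming standard Debian tooling; the node name should resolve to the LAN address, not 127.0.0.1):
Bash:
getent hosts "$(hostname)" # resolve the node's own hostname via NSS (/etc/hosts is consulted first)
hostname --ip-address # should print the host's LAN address, not a loopback address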
 
root@pve9-nuc:~# timedatectl status
Local time: Tue 2025-08-26 08:25:26 CEST
Universal time: Tue 2025-08-26 06:25:26 UTC
RTC time: Tue 2025-08-26 06:25:26
Time zone: Europe/Berlin (CEST, +0200)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
root@pve9-nuc:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.178.84 pve9-nuc.fritz.box pve9-nuc

# The following lines are desirable for IPv6 capable hosts

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
After restarting the following services, the same error persists...

systemctl restart rrdcached
systemctl restart pvestatd
systemctl restart pveproxy
 
Thank you for the provided output.
At this point, it seems the issue is due to a connection problem with the storage. I previously didn't think this would cause the problem, but in fact it's highly likely to be the cause (see also the reachability sketch at the end of this post):
Code:
Aug 22 15:25:24 pve9-nuc pvestatd[1141]: unable to activate storage 'OMV' - directory '/mnt/pve/OMV' does not exist or is unreachable
Please disable any unused storage and see if that helps:
Datacenter -> Storage
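
The same can be done from the CLI; a sketch using the storage names from this thread (pvesm set with --disable is standard syntax, so the storage definitions stay in place):
Bash:
pvesm set OMV --disable 1 # disable the storage without removing its definition
pvesm set HDD500 --disable 1
pvesm set USB --disable 1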

In case the problem still persists after disabling the unused storage, please provide the following:
1.
Bash:
~# time pvesm status
This way we can see how much time it takes to connect to the storage.
2.
Bash:
~# journalctl --unit=pvestatd -n 50 --no-pager
In order to check the status after the service was restarted.
3.
Bash:
~# cat /etc/pve/storage.cfg
In order to check the storage config.
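
Since pvestatd repeatedly reports the CIFS storages as not online, it is also worth checking whether the NAS itself is reachable. A sketch, with the server address and username taken from the storage.cfg posted below (smbclient may need to be installed first via apt install smbclient):
Bash:
ping -c 3 192.168.178.82 # basic reachability of the CIFS server
smbclient -L //192.168.178.82 -U jke # list the exported shares; prompts for the password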
 
Please disable any unused storage and see if that helps:
Datacenter -> Storage

I disabled all storage, but the problem is the same. I also rebooted, and the problem is still the same.


root@pve9-nuc:~# time pvesm status
Name             Type     Status          Total       Used  Available        %
HDD500           cifs     disabled            0          0          0      N/A
OMV              cifs     disabled            0          0          0      N/A
USB              cifs     disabled            0          0          0      N/A
local            dir      active       98497780    3658708   89789524    3.71%
local-lvm        lvmthin  active      832888832  244869316  588019515   29.40%

real 0m0.467s
user 0m0.349s
sys 0m0.057s

root@pve9-nuc:~# journalctl --unit=pvestatd -n 50 --no-pager
Aug 15 08:31:45 pve9-nuc pvestatd[561586]: send HUP to 1141
Aug 15 08:31:45 pve9-nuc pvestatd[1141]: received signal HUP
Aug 15 08:31:45 pve9-nuc pvestatd[1141]: server shutdown (restart)
Aug 15 08:31:45 pve9-nuc systemd[1]: Reloaded pvestatd.service - PVE Status Daemon.
Aug 15 08:31:46 pve9-nuc pvestatd[1141]: restarting server
Aug 16 07:52:46 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 17 07:52:48 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 18 07:52:51 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 19 07:52:51 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 20 07:52:52 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 21 07:52:57 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 21 11:49:17 pve9-nuc pvestatd[1141]: modified cpu set for lxc/108: 0-1
Aug 21 11:50:26 pve9-nuc pvestatd[1141]: unable to get PID for CT 108 (not running?)
Aug 22 07:53:01 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 22 15:25:23 pve9-nuc pvestatd[1141]: got timeout
Aug 22 15:25:24 pve9-nuc pvestatd[1141]: unable to activate storage 'OMV' - directory '/mnt/pve/OMV' does not exist or is unreachable
Aug 23 07:53:01 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 24 07:53:05 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 25 07:53:07 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 26 07:53:08 pve9-nuc pvestatd[1141]: auth key pair too old, rotating..
Aug 26 08:27:47 pve9-nuc systemd[1]: Stopping pvestatd.service - PVE Status Daemon...
Aug 26 08:27:47 pve9-nuc pvestatd[1141]: received signal TERM
Aug 26 08:27:47 pve9-nuc pvestatd[1141]: server closing
Aug 26 08:27:47 pve9-nuc pvestatd[1141]: server stopped
Aug 26 08:27:48 pve9-nuc systemd[1]: pvestatd.service: Deactivated successfully.
Aug 26 08:27:48 pve9-nuc systemd[1]: Stopped pvestatd.service - PVE Status Daemon.
Aug 26 08:27:48 pve9-nuc systemd[1]: pvestatd.service: Consumed 8h 54.128s CPU time, 238.9M memory peak, 80.8M memory swap peak.
Aug 26 08:27:48 pve9-nuc systemd[1]: Starting pvestatd.service - PVE Status Daemon...
Aug 26 08:27:49 pve9-nuc pvestatd[4028470]: starting server
Aug 26 08:27:49 pve9-nuc systemd[1]: Started pvestatd.service - PVE Status Daemon.
Aug 26 12:30:44 pve9-nuc systemd[1]: Stopping pvestatd.service - PVE Status Daemon...
Aug 26 12:30:45 pve9-nuc pvestatd[4028470]: received signal TERM
Aug 26 12:30:45 pve9-nuc pvestatd[4028470]: server closing
Aug 26 12:30:45 pve9-nuc pvestatd[4028470]: server stopped
Aug 26 12:30:46 pve9-nuc systemd[1]: pvestatd.service: Deactivated successfully.
Aug 26 12:30:46 pve9-nuc systemd[1]: Stopped pvestatd.service - PVE Status Daemon.
Aug 26 12:30:46 pve9-nuc systemd[1]: pvestatd.service: Consumed 6min 23.553s CPU time, 209.5M memory peak.
Aug 26 12:30:46 pve9-nuc systemd[1]: Starting pvestatd.service - PVE Status Daemon...
Aug 26 12:30:46 pve9-nuc pvestatd[4080919]: starting server
Aug 26 12:30:46 pve9-nuc systemd[1]: Started pvestatd.service - PVE Status Daemon.
Aug 26 12:34:22 pve9-nuc systemd[1]: Stopping pvestatd.service - PVE Status Daemon...
Aug 26 12:34:22 pve9-nuc pvestatd[4080919]: received signal TERM
Aug 26 12:34:22 pve9-nuc pvestatd[4080919]: server closing
Aug 26 12:34:22 pve9-nuc pvestatd[4080919]: server stopped
Aug 26 12:34:23 pve9-nuc systemd[1]: pvestatd.service: Deactivated successfully.
Aug 26 12:34:23 pve9-nuc systemd[1]: Stopped pvestatd.service - PVE Status Daemon.
Aug 26 12:34:23 pve9-nuc systemd[1]: pvestatd.service: Consumed 5.464s CPU time, 207.9M memory peak.
-- Boot 8e48d20f6c4e4f81879acce595a1c202 --
Aug 26 12:35:08 pve9-nuc systemd[1]: Starting pvestatd.service - PVE Status Daemon...
Aug 26 12:35:09 pve9-nuc pvestatd[1127]: starting server
Aug 26 12:35:09 pve9-nuc systemd[1]: Started pvestatd.service - PVE Status Daemon.


root@pve9-nuc:~# cat /etc/pve/storage.cfg
dir: local
	path /var/lib/vz
	content iso,vztmpl,backup

lvmthin: local-lvm
	thinpool data
	vgname pve
	content rootdir,images

cifs: OMV
	disable
	path /mnt/pve/OMV
	server 192.168.178.82
	share OMV
	content backup,iso
	prune-backups keep-all=1
	username jke

cifs: HDD500
	disable
	path /mnt/pve/HDD500
	server 192.168.178.82
	share 500G
	content iso,backup
	prune-backups keep-all=1
	username jke

cifs: USB
	disable
	path /mnt/pve/USB
	server 192.168.178.82
	share USB
	content iso,backup
	prune-backups keep-all=1
	username jke
 
Could you check whether the rrddata works through the CLI:
Bash:
~# pvesh get nodes/pve9-nuc/rrddata --timeframe hour

Also check and provide the status for rrdcached:
Bash:
~# ls -lh /var/lib/rrdcached/db/pve-node-9.0/
~# systemctl status rrdcached.service
~# journalctl --unit=rrdcached -n 50 --no-pager
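
If you want to inspect the node's RRD file directly, a minimal sketch (assuming the file and rrdtool CLI tools are present, which is not guaranteed on a default PVE install):
Bash:
file /var/lib/rrdcached/db/pve-node-9.0/pve9-nuc # a valid file is reported as an RRDTool DB
rrdtool info /var/lib/rrdcached/db/pve-node-9.0/pve9-nuc # dumps the header of a valid RRD file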

Reference: summary-graphs-are-blank
 
root@pve9-nuc:~# pvesh get nodes/pve9-nuc/rrddata --timeframe hour
RRD error: mmaping file '/var/lib/rrdcached/db/pve-node-9.0/pve9-nuc': Invalid argument
root@pve9-nuc:~# ls -lh /var/lib/rrdcached/db/pve-node-9.0/
total 0
-rw-r--r-- 1 root root 0 Aug 13 07:50 pve9-nuc
root@pve9-nuc:~# systemctl status rrdcached.service
● rrdcached.service - Data caching daemon for rrdtool
     Loaded: loaded (/usr/lib/systemd/system/rrdcached.service; enabled; preset: enabled)
     Active: active (running) since Tue 2025-08-26 12:35:06 CEST; 3h 0min ago
 Invocation: 74ffac64ec0e4c33b46a08170671de3d
TriggeredBy: ● rrdcached.socket
       Docs: man:rrdcached(1)
   Main PID: 796 (rrdcached)
      Tasks: 7 (limit: 38054)
     Memory: 7.2M (peak: 7.7M)
        CPU: 259ms
     CGroup: /system.slice/rrdcached.service
             └─796 /usr/bin/rrdcached -g

Aug 26 12:35:06 pve9-nuc systemd[1]: Started rrdcached.service - Data caching daemon for rrdtool.



root@pve9-nuc:~# journalctl --unit=rrdcached -n 50 --no-pager
Aug 13 07:52:36 pve9-nuc systemd[1]: Started rrdcached.service - Data caching daemon for rrdtool.
Aug 13 07:57:39 pve9-nuc systemd[1]: Stopping rrdcached.service - Data caching daemon for rrdtool...
Aug 13 07:57:40 pve9-nuc systemd[1]: rrdcached.service: Deactivated successfully.
Aug 13 07:57:40 pve9-nuc systemd[1]: Stopped rrdcached.service - Data caching daemon for rrdtool.
-- Boot 27d445a71ca04741a792d34381cf74aa --
Aug 13 07:58:11 pve9-nuc systemd[1]: Started rrdcached.service - Data caching daemon for rrdtool.
Aug 13 13:04:26 pve9-nuc systemd[1]: Stopping rrdcached.service - Data caching daemon for rrdtool...
Aug 13 13:04:27 pve9-nuc systemd[1]: rrdcached.service: Deactivated successfully.
Aug 13 13:04:27 pve9-nuc systemd[1]: Stopped rrdcached.service - Data caching daemon for rrdtool.
-- Boot e8bf97aeabb749bab359978897325bc3 --
Aug 13 13:04:56 pve9-nuc systemd[1]: Started rrdcached.service - Data caching daemon for rrdtool.
-- Boot 72d10327f78d4ef080c1c9889b7b5c02 --
Aug 13 13:25:45 pve9-nuc systemd[1]: Started rrdcached.service - Data caching daemon for rrdtool.
Aug 26 08:27:30 pve9-nuc systemd[1]: Stopping rrdcached.service - Data caching daemon for rrdtool...
Aug 26 08:27:30 pve9-nuc systemd[1]: rrdcached.service: Deactivated successfully.
Aug 26 08:27:30 pve9-nuc systemd[1]: Stopped rrdcached.service - Data caching daemon for rrdtool.
Aug 26 08:27:30 pve9-nuc systemd[1]: rrdcached.service: Consumed 24.821s CPU time, 8M memory peak.
Aug 26 08:27:30 pve9-nuc systemd[1]: Started rrdcached.service - Data caching daemon for rrdtool.
Aug 26 12:30:36 pve9-nuc systemd[1]: Stopping rrdcached.service - Data caching daemon for rrdtool...
Aug 26 12:30:36 pve9-nuc systemd[1]: rrdcached.service: Deactivated successfully.
Aug 26 12:30:36 pve9-nuc systemd[1]: Stopped rrdcached.service - Data caching daemon for rrdtool.
Aug 26 12:30:36 pve9-nuc systemd[1]: Started rrdcached.service - Data caching daemon for rrdtool.
Aug 26 12:34:37 pve9-nuc systemd[1]: Stopping rrdcached.service - Data caching daemon for rrdtool...
Aug 26 12:34:37 pve9-nuc systemd[1]: rrdcached.service: Deactivated successfully.
Aug 26 12:34:37 pve9-nuc systemd[1]: Stopped rrdcached.service - Data caching daemon for rrdtool.
-- Boot 8e48d20f6c4e4f81879acce595a1c202 --
Aug 26 12:35:06 pve9-nuc systemd[1]: Started rrdcached.service - Data caching daemon for rrdtool.
 
Alright, I think we're making progress. At least now we can see the error clearly in the rrddata output:
Bash:
root@pve9-nuc:~# pvesh get nodes/pve9-nuc/rrddata --timeframe hour
RRD error: mmaping file '/var/lib/rrdcached/db/pve-node-9.0/pve9-nuc': Invalid argument

The node's RRD file is zero bytes (see the ls output above), and mmap() of an empty file fails with exactly this "Invalid argument" error. Depending on how important the historical data is, I would try the fix mentioned in [1] and [2]:
Bash:
~# cd /var/lib/
~# systemctl stop rrdcached
~# mv rrdcached rrdcached.bck
~# systemctl start rrdcached
I hope this helps!

[1] RRDC and RRD update errors
[2] RRDC update errors
 
It doesn't help...

I get the following:
root@pve9-nuc:/var/lib# systemctl stop rrdcached
Stopping 'rrdcached.service', but its triggering units are still active:
rrdcached.socket
 
Now it works.

Apparently I wasn't supposed to delete the folder /var/lib/rrdcached outright: it is not recreated automatically.

I manually recreated /var/lib/rrdcached and /var/lib/rrdcached/db and restarted the services. Now I have data...

Thanks!!! I understand better now.
 
Please edit the first post of the thread and select 'Solved' from the pull-down menu.
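
For reference, a consolidated sketch of the sequence that worked in this thread (the explicit socket handling and the pvestatd restart are inferred from the posts above; paths as on a standard PVE 9 install):
Bash:
systemctl stop rrdcached.socket rrdcached.service # stop the socket too, or it re-triggers the daemon
mv /var/lib/rrdcached /var/lib/rrdcached.bck # keep the old (broken) databases as a backup
mkdir -p /var/lib/rrdcached/db # the directory is not recreated automatically
systemctl start rrdcached.socket rrdcached.service
systemctl restart pvestatd # writes fresh RRD files, so the graphs repopulate from now on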