pvestatd spamming system log since today's updates

prmadmax

Hi, since today's updates I am seeing the messages below spam the system log every 5 seconds.

I have restarted the service with no change; it still posts to the system log every 5 seconds. Any idea whether this is a bug or anything to worry about?

Code:
Sep 05 10:55:45 prox001 pvestatd[3449931]: status update time (6.023 seconds)
Sep 05 10:55:55 prox001 pvestatd[3449931]: status update time (5.413 seconds)
Sep 05 10:56:04 prox001 pvestatd[3449931]: status update time (5.031 seconds)
Sep 05 10:56:24 prox001 pvestatd[3449931]: status update time (5.482 seconds)
Sep 05 10:56:35 prox001 pvestatd[3449931]: status update time (5.266 seconds)
Sep 05 10:56:44 prox001 pvestatd[3449931]: status update time (5.086 seconds)
Sep 05 10:56:54 prox001 pvestatd[3449931]: status update time (5.055 seconds)
Sep 05 10:57:04 prox001 pvestatd[3449931]: status update time (5.123 seconds)
Sep 05 10:57:14 prox001 pvestatd[3449931]: status update time (5.272 seconds)
Sep 05 10:57:25 prox001 pvestatd[3449931]: status update time (5.397 seconds)
Sep 05 10:57:34 prox001 pvestatd[3449931]: status update time (5.273 seconds)
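
For reference, restarting the service and following these messages was done with the stock systemd unit, nothing custom:

Code:
# restart the daemon (made no difference for me)
systemctl restart pvestatd
# follow the status update messages live
journalctl -u pvestatd -f
# or only show the last hour
journalctl -u pvestatd --since "1 hour ago"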
 
Only a developer can tell you for sure. I thought it might be related to cluster communication issues but since you have none...
 
Most often this means a storage is slow to respond, which significantly delays the periodic status check that pvestatd does.
Check how long executing pvesm status in a root shell takes; that might give you some hints.

If not, you really need to tell us more about this setup: how many nodes, how many virtual guests, what hardware, what storages, ...
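
For example, timing it directly (run as root; names and timings will obviously differ per setup):

Code:
# time a full storage status pass, similar to what pvestatd does each cycle
time pvesm status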
 
I think I have the same issue: very often pvestatd will just sit there and consume a whole bunch of CPU (apparently 100% of one core) and then log the same entry as shown above:
(attached screenshot: Screenshot From 2025-09-05 22-45-26.png)
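
To confirm where the time goes, the daemon's CPU use can be sampled with plain top pinned to the pvestatd main PID (standard systemd/top invocations, nothing PVE-specific):

Code:
# grab the main PID from systemd and sample its CPU usage for ~10 seconds
top -b -n 10 -d 1 -p "$(systemctl show -p MainPID --value pvestatd)"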

It is a single server with a single VM at the moment: a Supermicro X10DRi with dual E5-2690 v4 CPUs and 512 GB of RAM.
Local storage is an LSI MegaRAID 9361-8i RAID controller with LVM on top.
pvesm status takes about 1-3 seconds to execute in the shell:

Code:
Name             Type     Status           Total            Used       Available        %
data          lvmthin     active       852008960        37573595       814435364    4.41%
local             dir     active        26557412         9670908        15512092   36.42%
local-lvm     lvmthin     active        20803584               0        20803584    0.00%

Kernel Version: Linux 6.14.8-2-pve (2025-07-22T10:04Z)
Boot Mode: EFI
Manager Version: pve-manager/9.0.6/49c767b70aeb6648
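
(The same version details can be pulled on the CLI with pveversion, in case that is easier to compare:)

Code:
pveversion -v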
 
Hi,

It's a Dell R640 with 2x 250 GB 870 EVO SSDs in a ZFS mirror (mirror-0), which Proxmox is installed on. The other six drives are 500 GB 870 EVOs in RAIDZ2.

2x Intel(R) Xeon(R) Gold 6244 CPU @ 3.60GHz (2 Sockets)
Kernel Version: Linux 6.14.8-2-pve (2025-07-22T10:04Z)
Boot Mode: EFI (Secure Boot)
pve-manager/9.0.6/49c767b70aeb6648
256 GiB RAM

Code:
root@prox001:~# pvesm status
Name                   Type     Status           Total            Used       Available        %
local                   dir     active       227723136        23943040       203780096   10.51%
local-zfs           zfspool     active       203780276              96       203780180    0.00%
vm_data             zfspool     active      1883847680       702799978      1181047701   37.31%
root@prox001:~#

The issues only started after applying the latest updates, and there were not many of those for me since I check for updates a couple of times a week.

12 running VMs: 9 Windows, the rest Linux based.

IO delay is between 0.10 and 0.22, so the disks in this setup are very responsive. Uptime is 24 days, from when it was last updated and rebooted; other updates have been applied since then.
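
If it helps, per-disk latency while one of the slow status updates is logged can be sampled with iostat; note that sysstat is not installed by default:

Code:
apt install sysstat
# extended per-device stats, five samples two seconds apart
iostat -dx 2 5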
 
With your list of storage pools, 3s is pretty long. You can try to "mask" (disable) all but one pool at a time in your storage.cfg to see if one in particular takes too long.
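
A quick way to do that without hand-editing storage.cfg is the disable flag on pvesm; taking the vm_data pool from your output as the example name:

Code:
# temporarily disable one storage, re-time the status pass, then re-enable it
pvesm set vm_data --disable 1
time pvesm status
pvesm set vm_data --disable 0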

Here is an example of a much larger set returning much faster:
Code:
time pvesm status
Name               Type          Status            Total            Used        Available        %
bb-iscsi           blockbridge   active      64424509440      3987936944      60436572496    6.19%
bb-nvme            blockbridge   active      64424509440      3987936944      60436572496    6.19%
iso                nfs           active        337558528       100905728        236652800   29.89%
local              dir           active         20466256        14961888          4439408   73.11%
nfs                nfs           active        289793408        53140608        236652800   18.34%
optane-lvm         lvmthin       active         68837376               0         68837376    0.00%
sb_group           lvm           inactive              0               0                0    0.00%

real    0m1.004s
user    0m0.746s
sys     0m0.133s

You can also try to time "zpool list" or similar ZFS commands.
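
For example, timing the usual ZFS commands directly:

Code:
time zpool list
time zpool status -x
time zfs list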


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox