pvestatd[15200]: status update time

hidalgo

Renowned Member
Nov 11, 2016
I'm running a Proxmox cluster with 2 nodes. Everything works fine, but something is irritating: every 10 seconds there is an entry like this in the syslog, on both nodes:
Code:
Dec 20 08:45:09 phys1 pvestatd[15200]: status update time (5.233 seconds)
Dec 20 08:45:20 phys1 pvestatd[15200]: status update time (5.323 seconds)
Dec 20 08:45:29 phys1 pvestatd[15200]: status update time (5.251 seconds)
Dec 20 08:45:39 phys1 pvestatd[15200]: status update time (5.222 seconds)
Dec 20 08:45:50 phys1 pvestatd[15200]: status update time (5.242 seconds)
Dec 20 08:45:59 phys1 pvestatd[15200]: status update time (5.211 seconds)
Dec 20 08:46:09 phys1 pvestatd[15200]: status update time (5.226 seconds)
Dec 20 08:46:19 phys1 pvestatd[15200]: status update time (5.245 seconds)
Dec 20 08:46:29 phys1 pvestatd[15200]: status update time (5.221 seconds)
Dec 20 08:46:40 phys1 pvestatd[15200]: status update time (5.246 seconds)
Dec 20 08:46:49 phys1 pvestatd[15200]: status update time (5.228 seconds)
Dec 20 08:46:59 phys1 pvestatd[15200]: status update time (5.223 seconds)
Dec 20 08:47:09 phys1 pvestatd[15200]: status update time (5.239 seconds)
Dec 20 08:47:20 phys1 pvestatd[15200]: status update time (5.227 seconds)
Dec 20 08:47:29 phys1 pvestatd[15200]: status update time (5.223 seconds)
Dec 20 08:47:39 phys1 pvestatd[15200]: status update time (5.238 seconds)
What does it mean? Is something wrong that I don't know about?
 
This is just informational - slow storage can delay the status updates. What is the output of:

# time pvesm status
Code:
root@phys0:~# time pvesm status
iscsi-zfs    zfs 1       3770679296        23349856      3747329440 1.12%
local        dir 1        259692800         7555584       252137216 3.41%
local-zfs  zfspool 1      261848448         9711140       252137308 4.21%
omv-nfs      nfs 1       3747329536               0      3747329536 0.50%

real    0m5.910s
user    0m0.700s
sys    0m0.096s
 
I'm not sure. I'm running both services, NFS and iSCSI, from OpenMediaVault, which runs as a VM on one of my nodes. How can I figure that out?
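Maybe timing each storage on its own would show which one is the slow one. pvesm status seems to take a --storage option, so something like this should work (storage names taken from my output above):
Code:
# time each configured storage separately to find the slow one
for s in local local-zfs iscsi-zfs omv-nfs; do
    echo "== $s =="
    time pvesm status --storage "$s"
done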
 
Did you ever figure this out? I'm having the same issue, with a similar setup to yours. (I have NFS and iSCSI services on FreeNAS that share storage with PVE for backups and live images.)

Code:
root@Orion:~# time pvesm status
  WARNING: Device /dev/sdb has size of 4294967296 sectors which is smaller than corresponding PV size of 10737418240 sectors. Was device resized?
  One or more devices used as PVs in VG shared have changed sizes.
Name              Type     Status           Total            Used       Available        %
iSCSI            iscsi     active               0               0               0    0.00%
local              dir   disabled               0               0               0      N/A
local-lvm      lvmthin   disabled               0               0               0      N/A
shared             lvm     active      5368705024       216006656      5152698368    4.02%
shared-nfs         nfs     active     28197202944      2140902144     26056300800    7.59%

real    0m0.432s
user    0m0.340s
sys     0m0.067s

The device resized warning is there because when I first created the iSCSI extent I allocated 5TB to it, then realized that was more than I needed, so I downsized it to 2TB while there was no data on it to be destroyed.
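If I read the LVM warning right, the PV metadata still records the old 5TB size. Something like pvresize should bring it back in line with the smaller device, assuming no allocated extents sit beyond the new end (worth checking with pvs first):
Code:
# inspect the PV before touching anything (the warning above names /dev/sdb)
pvs -o pv_name,pv_size,pv_free,vg_name /dev/sdb
# shrink the recorded PV size to match the actual device size;
# pvresize refuses if allocated extents would end up outside the device
pvresize /dev/sdb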
 
Not to resurrect an old thread, but I'm seeing these messages as well as InfluxDB read timeouts, yet `time pvesm status` always returns in less than a second:

Code:
❯ time pvesm status
Name                      Type     Status           Total            Used       Available        %
cluster                    rbd     active      1261163210        86719562      1174443648    6.88%
clusterfs-capacity      cephfs     active     21183918080     15460179968      5723738112   72.98%
local                      dir     active        28465204         6792068        20201856   23.86%
local-lvm              lvmthin     active       448536576       303748969       144787606   67.72%
local-lvm-sas          lvmthin   disabled               0               0               0      N/A
local-lvm-sata         lvmthin   disabled               0               0               0      N/A
rbd-capacity               rbd     active      5726603672         2864024      5723739648    0.05%
pvesm status  0.31s user 0.07s system 81% cpu 0.466 total
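
Since pvesm itself is quick here, I suspect the slow part is on the metric-server side rather than storage. My rough checklist, assuming the InfluxDB target is defined in /etc/pve/status.cfg (the host and port below are placeholders - adjust to your setup):
Code:
# where is pvestatd configured to send metrics?
cat /etc/pve/status.cfg
# check reachability/latency of the InfluxDB HTTP API (placeholder host/port)
time curl -s -o /dev/null -w '%{http_code}\n' http://INFLUX_HOST:8086/ping
# look at pvestatd's own log output around the timeouts
journalctl -u pvestatd --since "10 min ago"
# restarting pvestatd is a low-risk thing to try if a worker got stuck
systemctl restart pvestatd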