After latest upgrade to Proxmox VE 9, pvestatd very slow

Paolo Marinelli

Renowned Member
Mar 24, 2016
I investigated, and on kernel 6.14.11-2-pve it takes a long time to access smaps_rollup under /proc/XXXX, so pvestatd becomes very slow and 100% CPU-bound.

I traced "qm list" with strace and discovered that lseek() on /proc/XXXX/smaps_rollup takes a very long time:

14937 3748405 0.000060 lseek(5, 0, SEEK_CUR) = 0
14938 3748405 0.000047 fstat(5, {st_dev=makedev(0, 0x1d), st_ino=2126262, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=0, st_size=0, st_atime=1760490876 /* 2025-10-15T03:14:36.716230683+0200 */, st_atime_nsec=716230683, st_mtime=1760490876 /* 2025-10-15T03:14:36.716230683+0200 */, st_mtime_nsec=716230683, st_ctime=1760490876 /* 2025-10-15T03:14:36.716230683+0200 */, st_ctime_nsec=716230683}) = 0
14939 3748405 0.000081 read(5, "2541332\n", 8192) = 8
14940 3748405 0.000079 openat(AT_FDCWD, "/proc/2541332/smaps_rollup", O_RDONLY|O_CLOEXEC) = 6
14941 3748405 0.000079 ioctl(6, TCGETS, 0x7ffeccea5af0) = -1 ENOTTY (Inappropriate ioctl for device)
14942 3748405 0.000057 lseek(6, 0, SEEK_CUR) = 0
14943 3748405 0.000048 fstat(6, {st_dev=makedev(0, 0x18), st_ino=43643303, st_mode=S_IFREG|0444, st_nlink=1, st_uid=0, st_gid=0, st_blksize=1024, st_blocks=0, st_size=0, st_atime=1760490892 /* 2025-10-15T03:14:52.168763214+0200 */, st_atime_nsec=168763214, st_mtime=1760490892 /* 2025-10-15T03:14:52.168763214+0200 */, st_mtime_nsec=168763214, st_ctime=1760490892 /* 2025-10-15T03:14:52.168763214+0200 */, st_ctime_nsec=168763214}) = 0
14944 3748405 0.000082 read(6, "5ae9ceda8000-7ffe55050000 ---p 0"..., 8192) = 698
14945 3748405 0.162241 lseek(6, 138, SEEK_SET) = 138
14946 3748405 0.149105 lseek(6, 0, SEEK_CUR) = 138
14947 3748405 0.000041 close(6) = 0
14948 3748405 0.000053 read(5, "", 8192) = 0

So running "qm list" takes a very long time.

With about 55 VMs running Windows Server 2019:

time qm list

real 0m24.988s
user 0m1.333s
sys 0m23.618s
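For anyone who wants to check whether their host is affected without going through qm, here is a rough sketch (not from the original post) that simply times one full read of every /proc/PID/smaps_rollup and prints the slowest ones; on an affected kernel the reads for the QEMU processes should stand out:

```python
import glob
import time

def time_read(path):
    """Time one full read of a file; return (seconds, bytes) or None on error."""
    start = time.monotonic()
    try:
        with open(path, "rb") as f:
            data = f.read()
    except OSError:
        # Process exited, or permission denied (run as root for full coverage).
        return None
    return time.monotonic() - start, len(data)

if __name__ == "__main__":
    # Read every process's smaps_rollup once and print the ten slowest.
    timings = []
    for path in glob.glob("/proc/[0-9]*/smaps_rollup"):
        result = time_read(path)
        if result is not None:
            timings.append((result[0], path))
    for elapsed, path in sorted(timings, reverse=True)[:10]:
        print(f"{elapsed:9.4f}s  {path}")
```

Note this only measures the plain read; the strace above shows the extra cost is in lseek() on the already-open file, so the per-process times here are a lower bound on what qm/pvestatd pay.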

This behaviour is also the source of pvestatd's high load, constantly at 100% CPU:

journalctl -u pvestatd
....
Oct 15 15:12:43 srvprox11 pvestatd[3651690]: status update time (25.808 seconds)
Oct 15 15:13:07 srvprox11 pvestatd[3651690]: status update time (24.125 seconds)
Oct 15 15:13:31 srvprox11 pvestatd[3651690]: status update time (23.365 seconds)
Oct 15 15:13:54 srvprox11 pvestatd[3651690]: status update time (22.867 seconds)
Oct 15 15:14:18 srvprox11 pvestatd[3651690]: status update time (24.356 seconds)
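To quantify those journal entries over a longer window, a small sketch like the following (my own, not part of the original report) can pull the per-cycle times out of the journalctl output and summarize them:

```python
import re
import statistics
import sys

# Matches pvestatd lines like: "status update time (25.808 seconds)"
LINE_RE = re.compile(r"status update time \(([\d.]+) seconds\)")

def update_times(lines):
    """Extract the per-cycle 'status update time' values (seconds) from journal lines."""
    return [float(m.group(1)) for line in lines if (m := LINE_RE.search(line))]

if __name__ == "__main__" and not sys.stdin.isatty():
    # Usage: journalctl -u pvestatd | python3 this_script.py
    vals = update_times(sys.stdin)
    if vals:
        print(f"{len(vals)} cycles, mean {statistics.mean(vals):.1f}s, max {max(vals):.1f}s")
```

For the five lines above this gives a mean of about 24.1 seconds per status update cycle, which matches the near-permanent 100% CPU usage.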

On another cluster with the same release and kernel I don't have any problem.
 
Hello Proxmox community!

I’m surprised there hasn’t been more feedback on this topic.

Since upgrading to PVE 9, I’ve also noticed excessive CPU usage by pvestatd.

When I compare the CPU time consumed to the server’s uptime (about 6 days), I’m seeing around 90% of one CPU core used by pvestatd.
(And that’s a generous estimate; according to top, the process seems to sit constantly at around 99% CPU usage :rolleyes:)
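The lifetime-average figure above can be reproduced directly from /proc. Here is a minimal sketch of that calculation, assuming standard Linux proc(5) semantics; `cpu_share` is just an illustrative helper name, not a Proxmox tool:

```python
import os

CLK_TCK = os.sysconf("SC_CLK_TCK")  # clock ticks per second

def cpu_share(pid):
    """Average fraction of one CPU core a process has used over its lifetime,
    from /proc/<pid>/stat: (utime + stime) divided by time since process start."""
    with open(f"/proc/{pid}/stat") as f:
        # The comm field may contain spaces, so split after the closing ')'.
        fields = f.read().rsplit(")", 1)[1].split()
    utime, stime = int(fields[11]), int(fields[12])  # fields 14/15 in proc(5)
    starttime = int(fields[19])                      # field 22: ticks after boot
    with open("/proc/uptime") as f:
        uptime = float(f.read().split()[0])
    busy = (utime + stime) / CLK_TCK
    alive = uptime - starttime / CLK_TCK
    return busy / alive if alive > 0 else 0.0
```

Running this against the pvestatd PID should come out near 0.9 on an affected host, matching the ~90%-of-one-core estimate.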

This happens both on kernel 6.14 and 6.17.

Thanks to the whole community, keep up the great work!