pvestatd crash

Mar 28, 2024
Hi everyone, this is my first post.
I'm writing about a problem with a node; the same problem occurs on two different clusters in production.
In our datacenter we have 3 clusters with a total of 9 nodes.
On two of these nodes the pvestatd service crashes often; every time I check the dashboard I have to restart the service. In this case it is a cluster of 2 nodes, and on the other node, which is identical in hardware and also in the number of VMs and resources assigned, the service never crashes. The datacenter performed a hardware test, RAM first, with no errors.
I checked the integrity of the packages via debsums, which reports FAILED only on some configuration files modified by the PVE installer, I think:
/etc/issue FAILED
/etc/lvm/lvm.conf FAILED
/etc/cron.d/mdadm FAILED
/etc/apt/sources.list.d/pve-enterprise.list FAILED
/etc/systemd/timesyncd.conf FAILED
so everything seems fine.
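For reference, the check itself and a possible repair step look roughly like this (a sketch; the reinstall is only needed if a non-config file, e.g. a Perl module, shows up as FAILED):
Bash:
# list only files whose checksums fail (config files changed by the installer are expected to show up)
debsums -s
# if a library file such as /usr/share/perl5/IO/Multiplex.pm were listed, find its owning package and reinstall it
dpkg -S /usr/share/perl5/IO/Multiplex.pm
apt install --reinstall libio-multiplex-perl   # package name is an assumption, use whatever dpkg -S reports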
In the syslog, right before the first error
pvedaemon[374198]: VM 201 qmp command failed - VM 201 qmp command 'query-proxmox-support' failed - got timeout
I find this:
Mar 27 22:55:57 pve2node1 pvestatd[38449]: qemu status update error: Can't locate object method "Handle=HASH(0x55eabbc33fc8)" via package "IO::Multiplex" at /usr/share/perl5/IO/Multiplex.pm line 966.

pve-manager/7.4-17/513c62be
Linux 5.15.143-1-pve #1 SMP PVE 5.15.143-1
32 x 13th Gen Intel(R) Core(TM) i9-13900 (1 Socket)
126GB RAM
2 X 2TB NVME for root in MIRROR
and
2 x 4TB SSD Enterprise ZFS MIRROR
2 x 4TB SSD Enterprise ZFS MIRROR
Thanks!
 
Last edited:
The service continues to crash.
This is the output of journalctl -b -u pvestatd.service:

Mar 29 22:00:52 pve2node1 systemd[1]: Started PVE Status Daemon.
Mar 29 22:25:52 pve2node1 pvestatd[768068]: closing with write buffer at /usr/share/perl5/IO/Multiplex.pm line 928.
Mar 29 22:25:52 pve2node1 pvestatd[768068]: closing with write buffer at /usr/share/perl5/IO/Multiplex.pm line 928.
Mar 29 22:25:52 pve2node1 pvestatd[768068]: closing with read buffer at /usr/share/perl5/IO/Multiplex.pm line 927.
Mar 29 22:25:52 pve2node1 pvestatd[768068]: closing with read buffer at /usr/share/perl5/IO/Multiplex.pm line 927.
Mar 29 22:25:52 pve2node1 pvestatd[768068]: closing with read buffer at /usr/share/perl5/IO/Multiplex.pm line 927.
Mar 29 22:25:52 pve2node1 pvestatd[768068]: qemu status update error: Can't locate object method "Handle=HASH(0x55fd78c0d698)" via package "IO::Multiplex" at /usr/share/perl5/IO/Multiplex.pm line 966.
Mar 29 22:29:22 pve2node1 pvestatd[768068]: Use of uninitialized value $vmid in concatenation (.) or string at /usr/share/perl5/PVE/QemuServer/Helpers.pm line 29.
Mar 29 22:29:22 pve2node1 pvestatd[768068]: Use of uninitialized value $vmid in concatenation (.) or string at /usr/share/perl5/PVE/QemuServer/Helpers.pm line 29.
Mar 29 22:29:22 pve2node1 pvestatd[768068]: Use of uninitialized value $vmid in concatenation (.) or string at /usr/share/perl5/PVE/QemuServer/Helpers.pm line 29.
Mar 29 22:29:22 pve2node1 pvestatd[768068]: Use of uninitialized value $vmid in concatenation (.) or string at /usr/share/perl5/PVE/QemuServer/Helpers.pm line 29.
Mar 29 22:35:52 pve2node1 systemd[1]: pvestatd.service: Main process exited, code=killed, status=11/SEGV
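For reference, while the root cause is being tracked down, a systemd drop-in like the following can at least bring pvestatd back up automatically after a segfault (a stopgap sketch, not a fix):
Code:
systemctl edit pvestatd.service
# add in the editor that opens:
[Service]
Restart=on-failure
RestartSec=5
# then check that the override took effect:
systemctl cat pvestatd.service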
 
Hey. If you are able to boot into Windows, there are some Windows applications you can run that will stress test the system and report any CPU errors.

It may be worth trying to tweak the CPU voltages up a little, as the 13900/14900 seem to be rather power hungry and I believe the motherboard's generic presets may be a little conservative for some CPUs. On my MSI motherboard the setting was 'CPU Lite Load', which needed to be increased by a couple of steps.
 
Hi!
Thanks for the reply. The server is in a datacenter and the motherboard is a Supermicro, so I can't boot Windows, but I can tell you that the CPU is always at 20-60% and never goes above 90%.
The datacenter is now running a hardware test.
 
The hardware check came back clean after 3 hours, so all the hardware is supposedly OK, and for 2 days everything was fine. I thought the problem had been solved by the BIOS update and NIC firmware upgrade done during the hardware check, but today the server hung.
In the syslog I found this:
Apr 04 09:19:38 pve3n2 kernel: BUG: Bad page map in process pvestatd pte:8000000158beb845 pmd:1469ce067
This is the first error; going back to the night before, I found the first segfault:
Code:
Apr 04 01:51:14 pve3n2 kernel: pve-firewall[2205]: segfault at 0 ip 00006169e21ef6a4 sp 00007fff73e39570 error 4 in perl[6169e20a2000+195000] likely on CPU 12 (core 24, socket 0)
Apr 04 01:51:14 pve3n2 kernel: Code: 43 02 48 8d 0c 83 31 c0 48 39 d9 48 0f 45 c1 48 89 44 24 10 0f b6 43 01 48 8b 74 24 18 4c 8b 4e 10 4c 39 cd 0f 83 ec 00 00 00 <0f> b6 5d 00 48 81 fb c3 00 00 00 40 0f 9f c6 3d 96 00 00 00 0f 87

The errors are random, so I think it's RAM related, but the 3-hour hardware check says everything is fine!
Do I need to replace the memory?
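If scheduling a bootable memtest86+ run is difficult, an in-OS spot check with memtester is possible, though it is weaker because it can only test memory it can lock while the VMs are running (a sketch):
Bash:
apt install memtester
# test 8 GiB for 2 passes; pick a size that leaves enough free RAM for the running VMs
memtester 8G 2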
 
Last edited:
Hello. I'm encountering the same issue as the OP. The pvestatd service crashes with a segfault randomly but fairly often (at least once a day). This happens on only one node in a 5-node cluster (each node has the exact same hardware and configuration). I have done extensive hardware tests via the BIOS and the hardware is perfectly healthy. I have applied the latest microcode and am currently on kernel 6.8.4-3-pve.

Dmesg output as follows:
Bash:
~# dmesg | grep segfault
[ 5076.767482] pveproxy worker[18172]: segfault at 636eff4116 ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[ 5181.763977] pveproxy worker[18381]: segfault at a ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[ 5546.690105] pvestatd[1822]: segfault at 6650bb1a ip 000057741991d12a sp 00007ffceebc4d10 error 4 in perl[577419834000+195000] likely on CPU 12 (core 24, socket 0)
[ 8316.690294] pveproxy worker[24723]: segfault at 1f139ea08 ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[ 8616.680978] pveproxy worker[26436]: segfault at f224435ff4 ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[ 8856.678360] pveproxy worker[26791]: segfault at 80000008 ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 13 (core 24, socket 0)
[ 9543.957317] pveproxy worker[27457]: segfault at e ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 13 (core 24, socket 0)
[ 9678.954812] pveproxy worker[29287]: segfault at 143486547dd ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[10401.639431] pveproxy worker[30022]: segfault at f60c3450e ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[10521.636035] pveproxy worker[31587]: segfault at 8c9df3228a ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[12816.588109] pveproxy worker[37063]: segfault at 500000008 ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[14856.535753] pveproxy worker[43172]: segfault at 80000008 ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[15621.517985] pveproxy worker[43767]: segfault at 4c6dcdcb ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[18516.448940] pveproxy worker[54137]: segfault at a ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[18893.745429] pveproxy worker[53687]: segfault at ffffffffffffffff ip 0000595a678602a3 sp 00007fffa8af70a0 error 5 in perl[595a67777000+195000] likely on CPU 14 (core 28, socket 0)
[19866.418617] pve_exporter[55628]: segfault at 1 ip 0000794f6301d915 sp 0000794f60e01a60 error 4 in libpython3.11.so.1.0[794f62f39000+1d4000] likely on CPU 2 (core 4, socket 0)
[27153.549490] pveproxy worker[77942]: segfault at 278a8454a ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[27711.237586] pveproxy worker[78566]: segfault at 100000008 ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[29291.209876] pveproxy worker[85662]: segfault at ffffffffffffffff ip 0000595a678602a3 sp 00007fffa8af70a0 error 5 in perl[595a67777000+195000] likely on CPU 2 (core 4, socket 0)
[30921.168462] pve_exporter[58761]: segfault at 0 ip 0000794f62ff9975 sp 0000794f60e019e0 error 6 in libpython3.11.so.1.0[794f62f39000+1d4000] likely on CPU 13 (core 24, socket 0)
[31521.149088] pveproxy worker[91194]: segfault at 1e7ab ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[32013.461029] pveproxy worker[92297]: segfault at 9 ip 0000595a6786012a sp 00007fffa8af7530 error 4 in perl[595a67777000+195000] likely on CPU 13 (core 24, socket 0)
[32067.156804] pvestatd[35330]: segfault at ffffffffffffffff ip 000056fa3d1ac4cc sp 00007ffcacf108f0 error 7 in perl[56fa3d0c1000+195000] likely on CPU 13 (core 24, socket 0)
[33756.096622] pveproxy worker[96433]: segfault at 500000008 ip 0000595a6786012a sp 00007fffa8af7520 error 4 in perl[595a67777000+195000] likely on CPU 12 (core 24, socket 0)
[34851.071178] pveproxy worker[98445]: segfault at 5143ca903 ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 13 (core 24, socket 0)
[40415.949315] pve_exporter[90666]: segfault at 4 ip 0000794f62fe8bcc sp 0000794f614017d0 error 4 in libpython3.11.so.1.0[794f62f39000+1d4000] likely on CPU 12 (core 24, socket 0)
[46835.797577] pveproxy worker[132373]: segfault at 5074074408 ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 12 (core 24, socket 0)
[50090.721636] pveproxy worker[139583]: segfault at e3ce6d408 ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 12 (core 24, socket 0)
[55850.590955] pveproxy worker[154503]: segfault at 47af1f9e ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 12 (core 24, socket 0)
[58772.831717] pvedaemon worke[158999]: segfault at e584 ip 0000623157cfe12a sp 00007fff7adadcd0 error 4 in perl[623157c15000+195000] likely on CPU 12 (core 24, socket 0)
[60830.475219] pveproxy worker[168108]: segfault at 1fd5a ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 12 (core 24, socket 0)
[61292.762251] pveproxy worker[168394]: segfault at a ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 12 (core 24, socket 0)
[66380.343845] pveproxy worker[181968]: segfault at 41132 ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 12 (core 24, socket 0)
[70280.254653] pveproxy worker[190913]: segfault at 3996bd629 ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 12 (core 24, socket 0)
[70430.262029] pvedaemon worke[186573]: segfault at e ip 0000623157cfd094 sp 00007fff7adadd90 error 4 in perl[623157c15000+195000] likely on CPU 12 (core 24, socket 0)
[73022.487331] pveproxy worker[197975]: segfault at 62b8a ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 13 (core 24, socket 0)
[73595.173248] pveproxy worker[197587]: segfault at 6d5d770808 ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 12 (core 24, socket 0)
[74360.159400] pveproxy worker[199895]: segfault at 49227 ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 13 (core 24, socket 0)
[75305.136560] pveproxy worker[202174]: segfault at 1b4c1f6fa ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 12 (core 24, socket 0)
[75875.123770] pveproxy worker[204210]: segfault at 7d72e ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 12 (core 24, socket 0)
[83507.242196] pve_exporter[211178]: segfault at ffffffffffffffff ip 0000794f63023907 sp 0000794f60e01860 error 5 in libpython3.11.so.1.0[794f62f39000+1d4000] likely on CPU 13 (core 24, socket 0)
[84709.919460] pveproxy worker[225471]: segfault at 84635e08 ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 12 (core 24, socket 0)
[86957.175746] pvedaemon worke[231951]: segfault at 5513414e ip 0000623157cfe12a sp 00007fff7adadcd0 error 4 in perl[623157c15000+195000] likely on CPU 12 (core 24, socket 0)
[88894.822018] pveproxy worker[236370]: segfault at 80000008 ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 12 (core 24, socket 0)
[90514.784578] pveproxy worker[240999]: segfault at c ip 0000587e7b51d12a sp 00007fff37c26080 error 4 in perl[587e7b434000+195000] likely on CPU 12 (core 24, socket 0)

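Almost every one of those segfaults lands on core 24 (CPU 12/13, socket 0); a quick way to tally them per CPU (a one-liner sketch):
Bash:
dmesg | grep segfault | grep -oP 'on CPU \d+ \(core \d+' | sort | uniq -c | sort -rn
If a single core dominates, that points more toward a CPU defect than toward RAM.
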
Versions as follows:
Bash:
~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.4-3-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.4-3
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.1
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.2-1
proxmox-backup-file-restore: 3.2.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.6
pve-container: 5.1.10
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.7
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2

Not really sure how to troubleshoot further. Happy to follow up quickly with the results of any suggestions. Thanks in advance.
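One way to dig further is to capture a core dump of the next pvestatd crash and look at the backtrace (a sketch, assuming installing the debug tooling on the node is acceptable):
Bash:
apt install systemd-coredump gdb
# after the next segfault:
coredumpctl list pvestatd
coredumpctl gdb pvestatd     # then type 'bt' at the gdb prompt; a fuller backtrace needs the matching dbgsym packages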
 
Last edited:
Hi,
Hello. I'm encountering the same issue as the OP. The pvestatd service crashes with a segfault randomly but fairly often (at least once a day). This happens on only one node in a 5-node cluster (each node has the exact same hardware and configuration). I have done extensive hardware tests via the BIOS and the hardware is perfectly healthy. I have applied the latest microcode and am currently on kernel 6.8.4-3-pve.
Have you also run an extensive memory test with memtest86+? If there were no errors there, please try running debsums -s (you might need to install it with apt install debsums first) to see if anything in your Perl packages got corrupted.
 
Is there anything else in the journal around the time the issues happen? Did you already try booting with kernel 6.5 to see if the issue is related to the kernel?
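If you want to test the older kernel across reboots rather than selecting it manually in the boot menu, it can be pinned with proxmox-boot-tool (a sketch; the version string is taken from the pveversion output above):
Bash:
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 6.5.13-5-pve
reboot
# later, to return to the default kernel:
proxmox-boot-tool kernel unpin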
 
Journal as follows:
Bash:
~# journalctl -u pvestatd
May 24 12:07:22 neh1-pve-p04 systemd[1]: pvestatd.service: Main process exited, code=killed, status=11/SEGV
May 24 12:07:22 neh1-pve-p04 systemd[1]: pvestatd.service: Failed with result 'signal'.
May 24 12:07:22 neh1-pve-p04 systemd[1]: pvestatd.service: Consumed 8min 39.994s CPU time.
-- Boot 577cbdb0adfb4a3bb63bc3b5e8cb992f --
May 24 14:24:34 neh1-pve-p04 systemd[1]: Starting pvestatd.service - PVE Status Daemon...
May 24 14:24:34 neh1-pve-p04 pvestatd[1861]: starting server
May 24 14:24:34 neh1-pve-p04 systemd[1]: Started pvestatd.service - PVE Status Daemon.
May 24 14:27:14 neh1-pve-p04 pvestatd[1861]: VM 747325 qmp command failed - VM 747325 not running
May 24 14:27:14 neh1-pve-p04 pvestatd[1861]: VM 229085 qmp command failed - VM 229085 not running
May 24 14:27:17 neh1-pve-p04 systemd[1]: Stopping pvestatd.service - PVE Status Daemon...
May 24 14:27:17 neh1-pve-p04 pvestatd[1861]: received signal TERM
May 24 14:27:17 neh1-pve-p04 pvestatd[1861]: server closing
May 24 14:27:17 neh1-pve-p04 pvestatd[1861]: server stopped
May 24 14:27:18 neh1-pve-p04 systemd[1]: pvestatd.service: Deactivated successfully.
May 24 14:27:18 neh1-pve-p04 systemd[1]: Stopped pvestatd.service - PVE Status Daemon.
May 24 14:27:18 neh1-pve-p04 systemd[1]: pvestatd.service: Consumed 1.378s CPU time.
-- Boot 753006bf69ec417eb56e62015eed489f --
May 24 14:29:56 neh1-pve-p04 systemd[1]: Starting pvestatd.service - PVE Status Daemon...
May 24 14:29:57 neh1-pve-p04 pvestatd[1833]: starting server
May 24 14:29:57 neh1-pve-p04 systemd[1]: Started pvestatd.service - PVE Status Daemon.
May 24 14:31:40 neh1-pve-p04 systemd[1]: Stopping pvestatd.service - PVE Status Daemon...
May 24 14:31:40 neh1-pve-p04 pvestatd[1833]: received signal TERM
May 24 14:31:40 neh1-pve-p04 pvestatd[1833]: server closing
May 24 14:31:40 neh1-pve-p04 pvestatd[1833]: server stopped
May 24 14:31:41 neh1-pve-p04 systemd[1]: pvestatd.service: Deactivated successfully.
May 24 14:31:41 neh1-pve-p04 systemd[1]: Stopped pvestatd.service - PVE Status Daemon.
May 24 14:31:41 neh1-pve-p04 systemd[1]: pvestatd.service: Consumed 1.143s CPU time.
-- Boot 26749ea5a1524114be8d27de4832dd54 --
May 24 14:34:22 neh1-pve-p04 systemd[1]: Starting pvestatd.service - PVE Status Daemon...
May 24 14:34:22 neh1-pve-p04 pvestatd[1822]: starting server
May 24 14:34:22 neh1-pve-p04 systemd[1]: Started pvestatd.service - PVE Status Daemon.
May 24 16:06:42 neh1-pve-p04 systemd[1]: pvestatd.service: Main process exited, code=killed, status=11/SEGV
May 24 16:06:42 neh1-pve-p04 systemd[1]: pvestatd.service: Failed with result 'signal'.
May 24 16:06:42 neh1-pve-p04 systemd[1]: pvestatd.service: Consumed 21.324s CPU time.
May 24 17:52:43 neh1-pve-p04 systemd[1]: Starting pvestatd.service - PVE Status Daemon...
May 24 17:52:43 neh1-pve-p04 pvestatd[35330]: starting server
May 24 17:52:43 neh1-pve-p04 systemd[1]: Started pvestatd.service - PVE Status Daemon.
May 24 18:30:13 neh1-pve-p04 pvestatd[35330]: VM 241461 qmp command failed - VM 241461 not running
May 24 23:28:43 neh1-pve-p04 systemd[1]: pvestatd.service: Main process exited, code=killed, status=11/SEGV
May 24 23:28:43 neh1-pve-p04 systemd[1]: pvestatd.service: Failed with result 'signal'.
May 24 23:28:43 neh1-pve-p04 systemd[1]: pvestatd.service: Consumed 1min 19.377s CPU time.
May 25 17:02:28 neh1-pve-p04 systemd[1]: Starting pvestatd.service - PVE Status Daemon...
May 25 17:02:29 neh1-pve-p04 pvestatd[252960]: starting server
May 25 17:02:29 neh1-pve-p04 systemd[1]: Started pvestatd.service - PVE Status Daemon.
May 25 19:09:29 neh1-pve-p04 pvestatd[252960]: auth key pair too old, rotating..
May 26 03:00:49 neh1-pve-p04 systemd[1]: pvestatd.service: Main process exited, code=killed, status=11/SEGV
May 26 03:00:49 neh1-pve-p04 systemd[1]: pvestatd.service: Failed with result 'signal'.
May 26 03:00:49 neh1-pve-p04 systemd[1]: pvestatd.service: Consumed 2min 22.185s CPU time.
May 27 14:39:00 neh1-pve-p04 systemd[1]: Starting pvestatd.service - PVE Status Daemon...
May 27 14:39:00 neh1-pve-p04 pvestatd[689567]: starting server
May 27 14:39:00 neh1-pve-p04 systemd[1]: Started pvestatd.service - PVE Status Daemon.
May 27 15:20:19 neh1-pve-p04 systemd[1]: Stopping pvestatd.service - PVE Status Daemon...
May 27 15:20:19 neh1-pve-p04 pvestatd[689567]: received signal TERM
May 27 15:20:19 neh1-pve-p04 pvestatd[689567]: server closing
May 27 15:20:19 neh1-pve-p04 pvestatd[689567]: server stopped
May 27 15:20:20 neh1-pve-p04 systemd[1]: pvestatd.service: Deactivated successfully.
May 27 15:20:20 neh1-pve-p04 systemd[1]: Stopped pvestatd.service - PVE Status Daemon.
May 27 15:20:20 neh1-pve-p04 systemd[1]: pvestatd.service: Consumed 10.082s CPU time.
-- Boot 705032bfbd8643e8987896437312db85 --
May 27 16:50:56 neh1-pve-p04 systemd[1]: Starting pvestatd.service - PVE Status Daemon...
May 27 16:50:56 neh1-pve-p04 systemd[1]: pvestatd.service: Control process exited, code=killed, status=11/SEGV
May 27 16:50:56 neh1-pve-p04 systemd[1]: pvestatd.service: Failed with result 'signal'.
May 27 16:50:56 neh1-pve-p04 systemd[1]: Failed to start pvestatd.service - PVE Status Daemon.
May 27 16:51:11 neh1-pve-p04 systemd[1]: Starting pvestatd.service - PVE Status Daemon...
May 27 16:51:12 neh1-pve-p04 pvestatd[2342]: starting server
May 27 16:51:12 neh1-pve-p04 systemd[1]: Started pvestatd.service - PVE Status Daemon.
May 28 12:17:22 neh1-pve-p04 systemd[1]: pvestatd.service: Main process exited, code=killed, status=11/SEGV
May 28 12:17:22 neh1-pve-p04 systemd[1]: pvestatd.service: Failed with result 'signal'.
May 28 12:17:22 neh1-pve-p04 systemd[1]: pvestatd.service: Consumed 4min 20.150s CPU time.

Kernel as follows:
Bash:
~# uname -a
Linux neh1-pve-p04 6.8.4-3-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.4-3 (2024-05-02T11:55Z) x86_64 GNU/Linux
 
Please share the full journal around the time the issue happens, not just for the specific unit.
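For example, something like this captures all units plus kernel messages in a window around a crash (a sketch; adjust the timestamps):
Bash:
journalctl --since "2024-05-28 11:00" --until "2024-05-28 12:30" > journal-around-crash.txt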

Are the frequent boots also expected? Or is it that your whole system is crashing?

What physical CPU do you have?
 
Journal around crash time follows:
Bash:
May 28 11:27:19 neh1-pve-p04 pveproxy[191575]: worker exit
May 28 11:27:19 neh1-pve-p04 pveproxy[1921]: worker 191575 finished
May 28 11:27:19 neh1-pve-p04 pveproxy[1921]: starting 1 worker(s)
May 28 11:27:19 neh1-pve-p04 pveproxy[1921]: worker 194822 started
May 28 11:28:13 neh1-pve-p04 pmxcfs[1224]: [dcdb] notice: data verification successful
May 28 11:28:24 neh1-pve-p04 pveproxy[194822]: Clearing outdated entries from certificate cache
May 28 11:29:06 neh1-pve-p04 consul[1101]: 2024-05-28T11:29:06.142Z [INFO]  agent: Synced check: check=service:instance-identity-exporter
May 28 11:32:04 neh1-pve-p04 kernel: pveproxy worker[194822]: segfault at becc7d4008 ip 00005f3ae5b4312a sp 00007ffda1cccfe0 error 4 in perl[5f3ae5a5a000+195000] likely on CPU 12 (core 24, socket 0)
May 28 11:32:04 neh1-pve-p04 kernel: Code: ff 00 00 00 81 e2 00 00 00 04 75 11 49 8b 96 f8 00 00 00 48 89 10 49 89 86 f8 00 00 00 49 83 ae f0 00 00 00 01 4d 85 ff 74 19 <41> 8b 47 08 85 c0 0f 84 c2 00 00 00 83 e8 01 41 89 47 08 0f 84 05
May 28 11:32:04 neh1-pve-p04 pveproxy[1921]: worker 194822 finished
May 28 11:32:04 neh1-pve-p04 pveproxy[1921]: starting 1 worker(s)
May 28 11:32:04 neh1-pve-p04 pveproxy[1921]: worker 195625 started
May 28 11:32:37 neh1-pve-p04 pveproxy[195625]: Clearing outdated entries from certificate cache
May 28 11:34:34 neh1-pve-p04 pveproxy[193864]: worker exit
May 28 11:34:34 neh1-pve-p04 pveproxy[1921]: worker 193864 finished
May 28 11:34:34 neh1-pve-p04 pveproxy[1921]: starting 1 worker(s)
May 28 11:34:34 neh1-pve-p04 pveproxy[1921]: worker 196046 started
May 28 11:34:44 neh1-pve-p04 pveproxy[196046]: Clearing outdated entries from certificate cache
May 28 11:34:50 neh1-pve-p04 pveproxy[1921]: worker 193220 finished
May 28 11:34:50 neh1-pve-p04 pveproxy[1921]: starting 1 worker(s)
May 28 11:34:50 neh1-pve-p04 pveproxy[1921]: worker 196096 started
May 28 11:34:51 neh1-pve-p04 pveproxy[196096]: Clearing outdated entries from certificate cache
May 28 11:34:53 neh1-pve-p04 pveproxy[196095]: worker exit
May 28 11:35:01 neh1-pve-p04 CRON[196126]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
May 28 11:35:01 neh1-pve-p04 CRON[196127]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
May 28 11:35:01 neh1-pve-p04 CRON[196126]: pam_unix(cron:session): session closed for user root
May 28 11:35:09 neh1-pve-p04 pmxcfs[1224]: [status] notice: received log
May 28 11:35:56 neh1-pve-p04 consul[1101]: 2024-05-28T11:35:56.720Z [INFO]  agent: Synced check: check=service:instance-identity-exporter
May 28 11:36:19 neh1-pve-p04 kernel: pveproxy worker[196096]: segfault at 2000a29408 ip 00005f3ae5b4312a sp 00007ffda1cccfe0 error 4 in perl[5f3ae5a5a000+195000] likely on CPU 12 (core 24, socket 0)
May 28 11:36:19 neh1-pve-p04 kernel: Code: ff 00 00 00 81 e2 00 00 00 04 75 11 49 8b 96 f8 00 00 00 48 89 10 49 89 86 f8 00 00 00 49 83 ae f0 00 00 00 01 4d 85 ff 74 19 <41> 8b 47 08 85 c0 0f 84 c2 00 00 00 83 e8 01 41 89 47 08 0f 84 05
May 28 11:36:19 neh1-pve-p04 pveproxy[1921]: worker 196096 finished
May 28 11:36:19 neh1-pve-p04 pveproxy[1921]: starting 1 worker(s)
May 28 11:36:19 neh1-pve-p04 pveproxy[1921]: worker 196351 started
May 28 11:36:54 neh1-pve-p04 pveproxy[196351]: Clearing outdated entries from certificate cache
May 28 11:40:31 neh1-pve-p04 consul[1101]: 2024-05-28T11:40:31.600Z [INFO]  agent: Synced check: check=service:instance-identity-exporter
May 28 11:44:53 neh1-pve-p04 consul[1101]: 2024-05-28T11:44:53.285Z [INFO]  agent: Synced check: check=service:instance-identity-exporter
May 28 11:45:01 neh1-pve-p04 CRON[197821]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
May 28 11:45:01 neh1-pve-p04 CRON[197822]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
May 28 11:45:01 neh1-pve-p04 CRON[197821]: pam_unix(cron:session): session closed for user root
May 28 11:47:49 neh1-pve-p04 pveproxy[196046]: worker exit
May 28 11:47:49 neh1-pve-p04 pveproxy[1921]: worker 196046 finished
May 28 11:47:49 neh1-pve-p04 pveproxy[1921]: starting 1 worker(s)
May 28 11:47:49 neh1-pve-p04 pveproxy[1921]: worker 198297 started
May 28 11:48:19 neh1-pve-p04 pveproxy[195625]: worker exit
May 28 11:48:19 neh1-pve-p04 pveproxy[1921]: worker 195625 finished
May 28 11:48:19 neh1-pve-p04 pveproxy[1921]: starting 1 worker(s)
May 28 11:48:19 neh1-pve-p04 pveproxy[1921]: worker 198383 started
May 28 11:49:30 neh1-pve-p04 consul[1101]: 2024-05-28T11:49:30.995Z [INFO]  agent: Synced check: check=service:instance-identity-exporter
May 28 11:49:34 neh1-pve-p04 pveproxy[198383]: Clearing outdated entries from certificate cache
May 28 11:49:35 neh1-pve-p04 pveproxy[198297]: Clearing outdated entries from certificate cache
May 28 11:50:00 neh1-pve-p04 chronyd[1162]: Selected source 192.168.200.70 (ntp.nehalemcapital.com)
May 28 11:50:09 neh1-pve-p04 pmxcfs[1224]: [status] notice: received log
May 28 11:50:52 neh1-pve-p04 pvedaemon[192659]: worker exit
May 28 11:50:52 neh1-pve-p04 pvedaemon[1909]: worker 192659 finished
May 28 11:50:52 neh1-pve-p04 pvedaemon[1909]: starting 1 worker(s)
May 28 11:50:52 neh1-pve-p04 pvedaemon[1909]: worker 198813 started
May 28 11:50:59 neh1-pve-p04 smartd[990]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 53 to 52
May 28 11:50:59 neh1-pve-p04 smartd[990]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 47 to 48
May 28 11:54:19 neh1-pve-p04 pveproxy[196351]: worker exit
May 28 11:54:19 neh1-pve-p04 pveproxy[1921]: worker 196351 finished
May 28 11:54:19 neh1-pve-p04 pveproxy[1921]: starting 1 worker(s)
May 28 11:54:19 neh1-pve-p04 pveproxy[1921]: worker 199404 started
May 28 11:54:49 neh1-pve-p04 pvedaemon[190784]: worker exit
May 28 11:54:49 neh1-pve-p04 pvedaemon[1909]: worker 190784 finished
May 28 11:54:49 neh1-pve-p04 pvedaemon[1909]: starting 1 worker(s)
May 28 11:54:49 neh1-pve-p04 pvedaemon[1909]: worker 199491 started
May 28 11:55:01 neh1-pve-p04 CRON[199520]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
May 28 11:55:01 neh1-pve-p04 CRON[199521]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
May 28 11:55:01 neh1-pve-p04 CRON[199520]: pam_unix(cron:session): session closed for user root
May 28 11:55:56 neh1-pve-p04 pveproxy[199404]: Clearing outdated entries from certificate cache
May 28 11:56:04 neh1-pve-p04 kernel: pveproxy worker[198297]: segfault at 100000008 ip 00005f3ae5b4312a sp 00007ffda1cccfe0 error 4 in perl[5f3ae5a5a000+195000] likely on CPU 12 (core 24, socket 0)
May 28 11:56:04 neh1-pve-p04 kernel: Code: ff 00 00 00 81 e2 00 00 00 04 75 11 49 8b 96 f8 00 00 00 48 89 10 49 89 86 f8 00 00 00 49 83 ae f0 00 00 00 01 4d 85 ff 74 19 <41> 8b 47 08 85 c0 0f 84 c2 00 00 00 83 e8 01 41 89 47 08 0f 84 05
May 28 11:56:04 neh1-pve-p04 pveproxy[1921]: worker 198297 finished
May 28 11:56:04 neh1-pve-p04 pveproxy[1921]: starting 1 worker(s)
May 28 11:56:04 neh1-pve-p04 pveproxy[1921]: worker 199702 started
May 28 11:56:27 neh1-pve-p04 consul[1101]: 2024-05-28T11:56:27.229Z [INFO]  agent: Synced check: check=service:instance-identity-exporter
May 28 11:58:03 neh1-pve-p04 pveproxy[199702]: Clearing outdated entries from certificate cache
May 28 12:01:04 neh1-pve-p04 pveproxy[198383]: worker exit
May 28 12:01:04 neh1-pve-p04 pveproxy[1921]: worker 198383 finished
May 28 12:01:04 neh1-pve-p04 pveproxy[1921]: starting 1 worker(s)
May 28 12:01:04 neh1-pve-p04 pveproxy[1921]: worker 200546 started
May 28 12:02:10 neh1-pve-p04 consul[1101]: 2024-05-28T12:02:10.408Z [INFO]  agent: Synced check: check=service:instance-identity-exporter
May 28 12:02:14 neh1-pve-p04 pveproxy[200546]: Clearing outdated entries from certificate cache
May 28 12:03:27 neh1-pve-p04 corosync[1773]:   [TOTEM ] Retransmit List: 773a8
May 28 12:04:34 neh1-pve-p04 pvedaemon[194238]: worker exit
May 28 12:04:34 neh1-pve-p04 pvedaemon[1909]: worker 194238 finished
May 28 12:04:34 neh1-pve-p04 pvedaemon[1909]: starting 1 worker(s)
May 28 12:04:34 neh1-pve-p04 pvedaemon[1909]: worker 201148 started
May 28 12:05:01 neh1-pve-p04 CRON[201223]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
May 28 12:05:01 neh1-pve-p04 CRON[201224]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
May 28 12:05:01 neh1-pve-p04 CRON[201223]: pam_unix(cron:session): session closed for user root
May 28 12:05:10 neh1-pve-p04 pmxcfs[1224]: [status] notice: received log
May 28 12:08:27 neh1-pve-p04 consul[1101]: 2024-05-28T12:08:27.738Z [INFO]  agent: Synced check: check=service:instance-identity-exporter
May 28 12:11:49 neh1-pve-p04 pveproxy[199404]: worker exit
May 28 12:11:49 neh1-pve-p04 pveproxy[1921]: worker 199404 finished
May 28 12:11:49 neh1-pve-p04 pveproxy[1921]: starting 1 worker(s)
May 28 12:11:49 neh1-pve-p04 pveproxy[1921]: worker 202385 started
May 28 12:12:47 neh1-pve-p04 pveproxy[202385]: Clearing outdated entries from certificate cache
May 28 12:13:04 neh1-pve-p04 pveproxy[199702]: worker exit
May 28 12:13:04 neh1-pve-p04 pveproxy[1921]: worker 199702 finished
May 28 12:13:04 neh1-pve-p04 pveproxy[1921]: starting 1 worker(s)
May 28 12:13:04 neh1-pve-p04 pveproxy[1921]: worker 202592 started
May 28 12:14:52 neh1-pve-p04 pveproxy[200546]: worker exit
May 28 12:14:52 neh1-pve-p04 pveproxy[1921]: worker 200546 finished
May 28 12:14:52 neh1-pve-p04 pveproxy[1921]: starting 1 worker(s)
May 28 12:14:52 neh1-pve-p04 pveproxy[1921]: worker 202904 started
May 28 12:14:54 neh1-pve-p04 pveproxy[202592]: Clearing outdated entries from certificate cache
May 28 12:14:59 neh1-pve-p04 pveproxy[202904]: Clearing outdated entries from certificate cache
May 28 12:15:01 neh1-pve-p04 CRON[202932]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
May 28 12:15:01 neh1-pve-p04 CRON[202933]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
May 28 12:15:01 neh1-pve-p04 CRON[202932]: pam_unix(cron:session): session closed for user root
May 28 12:15:15 neh1-pve-p04 consul[1101]: 2024-05-28T12:15:15.718Z [INFO]  agent: Synced check: check=service:instance-identity-exporter
May 28 12:17:01 neh1-pve-p04 CRON[203279]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
May 28 12:17:01 neh1-pve-p04 CRON[203280]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
May 28 12:17:01 neh1-pve-p04 CRON[203279]: pam_unix(cron:session): session closed for user root
May 28 12:17:22 neh1-pve-p04 kernel: pvestatd[2342]: segfault at ffffffffffffffff ip 0000601f59c444cc sp 00007ffeed713980 error 7 in perl[601f59b59000+195000] likely on CPU 12 (core 24, socket 0)
May 28 12:17:22 neh1-pve-p04 kernel: Code: 8b 43 0c e9 6a ff ff ff 66 0f 1f 44 00 00 3c 02 0f 86 a0 00 00 00 0d 00 00 00 10 48 8b 55 10 89 45 0c 48 8b 45 00 48 8b 40 18 <c6> 44 02 ff 00 48 8b 45 00 48 8b 75 10 48 8b 40 18 e9 73 ff ff ff
May 28 12:17:22 neh1-pve-p04 systemd[1]: pvestatd.service: Main process exited, code=killed, status=11/SEGV
May 28 12:17:22 neh1-pve-p04 systemd[1]: pvestatd.service: Failed with result 'signal'.
May 28 12:17:22 neh1-pve-p04 systemd[1]: pvestatd.service: Consumed 4min 20.150s CPU time.
May 28 12:17:24 neh1-pve-p04 consul[1101]: 2024-05-28T12:17:24.318Z [WARN]  agent: Check is now critical: check=service:pvestatd
May 28 12:17:24 neh1-pve-p04 consul[1101]: 2024-05-28T12:17:24.324Z [INFO]  agent: Synced check: check=service:pvestatd
May 28 12:17:34 neh1-pve-p04 consul[1101]: 2024-05-28T12:17:34.322Z [WARN]  agent: Check is now critical: check=service:pvestatd
May 28 12:17:44 neh1-pve-p04 consul[1101]: 2024-05-28T12:17:44.326Z [WARN]  agent: Check is now critical: check=service:pvestatd
May 28 12:17:54 neh1-pve-p04 consul[1101]: 2024-05-28T12:17:54.329Z [WARN]  agent: Check is now critical: check=service:pvestatd
May 28 12:18:04 neh1-pve-p04 consul[1101]: 2024-05-28T12:18:04.333Z [WARN]  agent: Check is now critical: check=service:pvestatd
May 28 12:18:14 neh1-pve-p04 consul[1101]: 2024-05-28T12:18:14.338Z [WARN]  agent: Check is now critical: check=service:pvestatd
May 28 12:18:24 neh1-pve-p04 consul[1101]: 2024-05-28T12:18:24.342Z [WARN]  agent: Check is now critical: check=service:pvestatd

Interestingly, pveproxy also seems to be segfaulting, as seen in the log above.

Regarding the "frequent" reboots: no, the system is not crashing. The system works fine as far as I can tell except for pvestatd getting killed. The reboots were to swap a NIC and run the hardware tests that were suggested (i.e. first the built-in BIOS tests and then memtest86+).

Physical CPU is: 24 x 13th Gen Intel(R) Core(TM) i7-13700 (1 Socket)
 
Last edited:
Wanted to post an update on this in case it helps someone in the future. It turns out that the CPU in the hypervisor machine was faulty. Faulty CPUs are pretty rare as far as things go, but that was what happened in this case.

If anyone sees repeated segfaults, consider running memtest86+ for 24 hours to rule out RAM as the cause. After that, you can do an exhaustive CPU stress test using stress-ng; be sure to use the --verify option so that computation results are actually checked. This should be able to identify most CPU hardware defects.
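A concrete starting point for such a test might look like this (a sketch; --verify is what makes stress-ng check the computed results rather than just generating load):
Bash:
apt install stress-ng
stress-ng --cpu 0 --cpu-method all --verify --metrics-brief --timeout 24h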

Thanks @fiona and others for your input on this.
 