pvestatd

  1. pvestatd.pm/rebalance_lxc_containers - NUMA awareness?

    Hey, is it possible to make the rebalance_lxc_containers function NUMA-aware? Currently it can assign LXCs across CCDs, which is not optimal. I have a Zen 3 processor with two CCDs (NPS2 enabled in the BIOS), so the OS is aware of it: node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23 node 1 cpus...
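
    Until the balancer learns about NUMA, one workaround is to pin a container to a single CCD by hand. A minimal sketch, assuming a hypothetical container ID 101 and the node 0 CPU list from the post:

    ```
    # Inspect the NUMA topology the kernel sees (numactl package; NPS2 exposes two nodes).
    numactl --hardware

    # Pin container 101 to NUMA node 0's CPUs via a raw LXC key in its Proxmox config.
    echo 'lxc.cgroup2.cpuset.cpus: 0-7,16-23' >> /etc/pve/lxc/101.conf
    pct reboot 101
    ```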
  2. Status: Unknown for all VMs and Drives - but they work fine?

    Hello, I've been a 'browser' here for a few years and have used Proxmox for several, but I just started a new server and have this never-before-seen issue. As per the title, not only the VMs but all the drives are marked as unknown (the VM numbers are wonky because I'm moving VMs from another node...
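
    A grey "unknown" status with guests still running usually points at the stats daemon rather than the guests. A minimal first check, assuming nothing more exotic than a stuck pvestatd:

    ```
    # Check the daemon and its recent errors.
    systemctl status pvestatd
    journalctl -u pvestatd --since "1 hour ago"

    # Restarting it is safe for running guests; it only collects status.
    systemctl restart pvestatd
    ```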
  3. Cannot start pvedaemon after rebooting.

    Hi guys, I ran into this problem when I found that the web UI couldn't open. I tried to check the PVE services and found this: [root@pve ~]# systemctl | grep pve etc-pve.mount loaded active...
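
    pvedaemon needs the cluster filesystem mounted at /etc/pve before it can start, so that is worth checking first. A hedged checklist:

    ```
    # pvedaemon depends on pmxcfs (pve-cluster) providing /etc/pve.
    systemctl status pve-cluster
    mount | grep /etc/pve

    # Then ask systemd why pvedaemon itself failed in this boot.
    journalctl -u pvedaemon -b --no-pager | tail -n 50
    systemctl restart pvedaemon
    ```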
  4. [SOLVED] status unknown - vgs not responding

    Hi guys, I have a rather strange problem with my current Proxmox configuration. The status of 2 out of 3 nodes always goes to unknown about 3 minutes after restarting a node; in those 3 minutes the status is online. The node I restarted is working fine. Does anyone know what I have done wrong...
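
    pvestatd polls every configured storage each cycle, so a single hanging `vgs` (for example against an unreachable shared LVM device) is enough to flip a node to unknown. A quick way to confirm the hang; the 10-second limit is arbitrary:

    ```
    # If this never returns, pvestatd's storage scan is blocking too.
    timeout 10 vgs || echo "vgs hung or failed"

    # List the devices LVM scans; a dead iSCSI or multipath path is a common culprit.
    pvs -o pv_name,vg_name
    ```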
  5. Can I stop the pvestatd service?

    Hi everyone, I need to know if I can stop pvestatd, and if I stop it, will it affect anything about the running LXC containers or their health? Thank you
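
    pvestatd only collects and broadcasts status data; stopping it does not touch running guests, it just blanks status and graphs in the GUI. A sketch:

    ```
    # Containers and VMs keep running; only status reporting pauses.
    systemctl stop pvestatd

    # Resume status collection later.
    systemctl start pvestatd
    ```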
  6. QMP Communication Issue (Auto-Resolved?)

    Hello everyone, Yesterday, our server rebooted normally after a blackout around 06:00 PM, and everything seemed fine. However, while checking the logs today, I noticed a QMP communication issue that occurred between 04:00 AM and 05:00 AM. The error message was: VM 111 qmp command failed -...
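
    QMP timeouts like this often mean the guest's monitor socket was briefly unresponsive (for example under heavy I/O) rather than permanently broken. One way to confirm the monitor answers again, using the VM ID from the log:

    ```
    # Query VM 111 over the same QMP channel pvestatd uses.
    qm status 111 --verbose
    ```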
  7. PVE -> PBS / pvestatd daemon generates "error fetching datastores - 401 Unauthorized"

    Hello Proxmox world, I have 2 Proxmox clusters (6 and 8). All of the Proxmox servers generate many errors like this on the Proxmox side: Aug 30 09:57:48 pveagir1 pvestatd[3808]: ct-pbsbrio: error fetching datastores - 401 Unauthorized Aug 30 10:02:08 pveagir1 pvestatd[3808]...
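
    A 401 from PBS usually means the credentials or API token stored for that storage entry have gone stale. A hedged sketch, assuming the storage is named ct-pbsbrio as in the log and that the password property can be updated via pvesm:

    ```
    # Re-enter the password/token secret for the PBS storage entry.
    pvesm set ct-pbsbrio --password 'NEW_SECRET'

    # Watch whether pvestatd can fetch the datastores again.
    journalctl -u pvestatd -f
    ```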
  8. iSCSI not coming up properly after host reboot

    We have hosts with multiple iSCSI datastores. After updating and rebooting a host, the host status shows a green check, but the iSCSI datastores display a grey question mark. Running `systemctl restart pvestatd` resolves the issue and the iSCSI datastores become available again. Before...
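
    This looks like a boot-time race: pvestatd scans the storages before the iSCSI sessions have logged in and does not recover on its own. A sketch for checking and re-establishing the sessions by hand before falling back to the pvestatd restart:

    ```
    # Are the iSCSI sessions actually up after boot?
    iscsiadm -m session

    # Log in any targets marked automatic that missed the boot-time login.
    iscsiadm -m node --loginall=automatic

    # Nudge the stats daemon to rescan the storages.
    systemctl restart pvestatd
    ```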
  9. [SOLVED] multiple hosts in cluster locking up after latest update to kernel 6.8.4

    Hi, we have been running Proxmox + Ceph since 2017: 15 hosts with AMD Opteron(tm) Processor 6380 (2 sockets) and AMD EPYC 7513. After the latest update on 8 May 2024, three Opteron hosts locked up - red X (no ping, no SSH, all VMs with a grey (?) mark); after a reboot everything was OK. After 6 hours, two...
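
    While a kernel regression like this is investigated, booting the last known-good kernel is the usual stopgap. A sketch using proxmox-boot-tool; the version string below is an example, not necessarily the right one for this setup:

    ```
    # List installed kernels, then pin the previous (pre-6.8) one.
    proxmox-boot-tool kernel list
    proxmox-boot-tool kernel pin 6.5.13-5-pve   # example version
    reboot
    ```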
  10. pvestatd crash

    Hi, I have been using Proxmox for a month now, but I am getting this error: Apr 03 21:43:10 ServerAlex systemd[1]: pvestatd.service: Main process exited, code=killed, status=11/SEGV Apr 03 21:43:10 ServerAlex systemd[1]: pvestatd.service: Failed with result 'signal'. Apr 03 21:43:10 ServerAlex...
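
    For a reproducible SIGSEGV in pvestatd, a core dump is the most useful artifact for a bug report. A minimal sketch, assuming systemd-coredump is installed:

    ```
    # List captured crashes and show details for the pvestatd one.
    coredumpctl list pvestatd
    coredumpctl info pvestatd
    ```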
  11. pvestatd crash

    Hi everyone, this is my first post; I'm writing about a problem with a node, the same problem on two different clusters in production. In our datacenter we have 3 clusters with a total of 9 nodes. On two of these, the pvestatd service crashes often; every time I check the dashboard I have to restart...
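
    Until the root cause is found, a systemd drop-in can at least restart the daemon automatically instead of waiting for a manual restart from the dashboard. A sketch following the usual systemd override convention:

    ```
    # Override: restart pvestatd automatically after a crash.
    mkdir -p /etc/systemd/system/pvestatd.service.d
    printf '[Service]\nRestart=on-failure\nRestartSec=5\n' \
      > /etc/systemd/system/pvestatd.service.d/restart.conf
    systemctl daemon-reload
    ```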
  12. Can't start VMs

    Hi everyone, I've got multiple issues with one of my Proxmox installations. It started with question marks being displayed on all machines and storage from time to time; the VMs were working at that time. I used to apply a fix as described here: [SOLVED] Proxmox question marks on all machines and...
  13. PVE random crash / pvestatd.service killed

    Yesterday, my PVE crashed "out of nowhere" (I did not change any configuration, issue any command, or anything of the sort; just normal VMs running as ever). It had run flawlessly on exactly this hardware for 2 years. Since the first thing that happened according to journalctl -xeb-1 was that the...
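
    When a box dies "out of nowhere", the previous boot's journal (the -1 in -xeb-1) is the right place to look; filtering to pvestatd and the kernel keeps it readable:

    ```
    # Everything pvestatd logged in the previous boot.
    journalctl -b -1 -u pvestatd --no-pager

    # The kernel's last words before the crash.
    journalctl -b -1 -k --no-pager | tail -n 100
    ```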
  14. pvestatd keeps crashing for no reason

    The pvestatd service keeps crashing after spamming the SDN status update error: malformed JSON string, neither tag, array, object, number, string or atom, at character offset 0 (before "(end of string)") at /usr/share/perl5/PVE/Network/SDN/Zones.pm line 201. I tested on another Proxmox machine and I...
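
    That Perl error means the SDN data pvestatd decodes is empty or truncated, so the JSON parser dies at offset 0. A hedged way to look for the offending blob; the path under /etc/pve/sdn is an assumption about where the broken data lives:

    ```
    # Inspect the SDN state files (paths are assumptions; adjust to the node).
    ls -l /etc/pve/sdn/

    # Validate the applied SDN config as JSON.
    python3 -m json.tool /etc/pve/sdn/.running-config || echo "broken/empty JSON"
    ```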
  15. pvestatd halts, causing status of VMs to disappear

    Hi, I'm having a problem with the status of VMs and LXCs disappearing and leaving a question mark. I had this before on earlier releases of Proxmox, but at some point on Proxmox 7 it was gone; I recently did a clean install of Proxmox 8, and now it's back. So I figured out that the...
  16. [SOLVED] [PMX cluster] Ceph pool unavailable status on only one node

    Hello, on our Proxmox cluster, the Ceph pool "Pool_SSD" appeared as unavailable in the Proxmox UI on only one node, but we had no anomaly (OSDs good / PG replication good). We had no pool usage graph on this node, and the pool's icons were unavailable. Solved: the pvestatd.service process was running...
  17. [SOLVED] Changing IP for pvestatd

    I recently changed the IP of my HomeLab from '192.168.178.54' to '192.168.178.169'. Since doing so, pvestatd still tries to connect to the old IP, which leads to a timeout and all my system info in the web interface being inaccessible. Jun 25 09:55:16 pve pvestatd[1639]: lxc status...
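
    PVE resolves its own node name through /etc/hosts, so after an IP change the old address tends to linger there (and in /etc/network/interfaces). A sketch with the addresses from the post:

    ```
    # Point the node's hostname and network config at the new address.
    sed -i 's/192\.168\.178\.54/192.168.178.169/g' /etc/hosts /etc/network/interfaces

    # Restart the services that cached the old address.
    systemctl restart pvestatd pvedaemon pveproxy
    ```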
  18. Linstor performance/scaling problem

    Good day everyone, we are currently running Proxmox 6.4-4 together with Linstor 1.7.1 (DRBD 9.0.28) as distributed block storage for VM disk images across 7 PVE nodes. We have noticed that, with a growing number of resources and PVE nodes, Linstor's performance considerably...
  19. Cluster Status Missing From External Metric Server Metrics

    Hello, I recently deployed a lot of small Proxmox VE clusters which now need monitoring. I have used Influx and Grafana to monitor containers and VMs in the past and was very surprised to see that the external metric server doesn't supply any metrics about the cluster status...
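
    Until the external metric server exports cluster health, it can be scraped out-of-band from the API and pushed directly. A minimal sketch using the InfluxDB line protocol; the InfluxDB URL and database name are placeholders:

    ```
    #!/bin/bash
    # Read quorum state from the cluster API and push it to InfluxDB.
    quorate=$(pvesh get /cluster/status --output-format json |
      python3 -c 'import json,sys; print(next((i.get("quorate", 0) for i in json.load(sys.stdin) if i["type"] == "cluster"), 0))')
    curl -s -XPOST 'http://influx.example:8086/write?db=proxmox' \
      --data-binary "cluster_status quorate=${quorate}i"
    ```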
  20. pvestatd leak creates 3000+ processes, consumes all RAM & swap, and halts/reboots the machine over a 4-hour cycle. Then repeats.

    This has occurred since the later PVE 6.1 updates (I think) and definitely throughout all of PVE 6.2 (including 6.2-12), since about March 2020. Prior to this the system was rock-solid for about 1.5 years. Normal operation is ~10-12 GiB of usage (of 32 GiB total). See the attached picture for the cycle...
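
    While the leak is hunted down, a small watchdog can restart pvestatd before the fork storm exhausts RAM. The threshold of 50 processes is an arbitrary assumption (the normal count is a handful); run it from cron every few minutes:

    ```
    #!/bin/bash
    # Workaround, not a fix: restart pvestatd if its process count explodes.
    count=$(pgrep -c pvestatd)
    if [ "${count:-0}" -gt 50 ]; then
        logger "pvestatd watchdog: ${count} processes, restarting"
        systemctl restart pvestatd
    fi
    ```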