Hello community,
I am seeking technical assistance regarding an incident that occurred on our Proxmox infrastructure on February 17, 2026.
At 14:25:56, our node pve2 rebooted with no apparent trigger. Initial analysis confirms that the High Availability stack fenced pve2 after it lost communication with the rest of the cluster. Service has since been restored, but we would like to identify the root cause of the isolation. For context, all three nodes in this cluster run Proxmox VE 8.4, and Corosync traffic is carried over a dedicated, physically separate network.
Furthermore, we observed a similar issue on another infrastructure located in a different rack (not on the same day), even though that cluster runs different Proxmox versions.
I have attached the logs recorded during the minutes surrounding the reboot of pve2, as well as the logs from pve1 to provide a complete overview of the cluster state during the event.
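If a wider time window or additional units would help, I can re-extract the logs; the attached files were limited to the cluster and HA services around the event, captured with something along these lines (timestamps shown here are illustrative and were adjusted to the actual window on each node):

journalctl --since "2026-02-17 14:15:00" --until "2026-02-17 14:40:00" -u corosync -u pve-cluster -u pve-ha-lrm -u pve-ha-crm -u watchdog-mux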
Any insight would be appreciated, whether you have encountered similar cases or can identify a specific pattern in the provided logs.