Proxmox physical host automatically reboots without error logs

okela

New Member
Dec 15, 2023
Hi everyone, I have a 4-node PVE cluster. Recently one of my nodes (named Proxmox102) has automatically rebooted twice in 4 days: the first time on 10th Dec and the second on 15th Dec. The syslog in the GUI shows nothing other than "-- Reboot --". I also have one VM configured for HA (I suspect this issue is related to HA, because the node never auto-rebooted before I configured HA). Please help me solve this problem. Thanks a lot! (My PVE version is 7.4-3.)


Code:
Dec 15 00:46:26 Proxmox102 pvestatd[1705]: status update time (7.346 seconds)
Dec 15 00:46:37 Proxmox102 pvestatd[1705]: status update time (7.355 seconds)
Dec 15 00:46:46 Proxmox102 pvestatd[1705]: status update time (7.226 seconds)
Dec 15 00:46:56 Proxmox102 pvestatd[1705]: status update time (7.453 seconds)
Dec 15 00:47:07 Proxmox102 pvestatd[1705]: status update time (7.636 seconds)
Dec 15 00:47:16 Proxmox102 pvestatd[1705]: status update time (7.402 seconds)
Dec 15 00:47:27 Proxmox102 pvestatd[1705]: status update time (7.415 seconds)
Dec 15 00:47:36 Proxmox102 pvestatd[1705]: status update time (7.592 seconds)
Dec 15 00:47:47 Proxmox102 pvestatd[1705]: status update time (7.440 seconds)
Dec 15 00:47:56 Proxmox102 pvestatd[1705]: status update time (7.373 seconds)
Dec 15 00:48:07 Proxmox102 pvestatd[1705]: status update time (7.599 seconds)
Dec 15 00:48:17 Proxmox102 pvestatd[1705]: status update time (7.772 seconds)
Dec 15 00:48:26 Proxmox102 pvestatd[1705]: status update time (7.479 seconds)
Dec 15 00:48:36 Proxmox102 pvestatd[1705]: status update time (7.244 seconds)
Dec 15 00:48:47 Proxmox102 pvestatd[1705]: status update time (7.641 seconds)
Dec 15 00:49:02 Proxmox102 pvescheduler[2537148]: INFO: Finished Backup of VM 101 (00:48:55)
Dec 15 00:49:02 Proxmox102 pvescheduler[2537148]: INFO: Backup job finished successfully
-- Reboot --
Dec 15 00:58:46 Proxmox102 kernel: Linux version 5.15.102-1-pve (build@proxmox) (gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2) #1 SMP PVE 5.15.102-1 (2023-03-14T13:48Z) ()
Dec 15 00:58:46 Proxmox102 kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-5.15.102-1-pve root=/dev/mapper/pve-root ro quiet
Dec 15 00:58:46 Proxmox102 kernel: KERNEL supported cpus:
Dec 15 00:58:46 Proxmox102 kernel:   Intel GenuineIntel
Dec 15 00:58:46 Proxmox102 kernel:   AMD AuthenticAMD
Dec 15 00:58:46 Proxmox102 kernel:   Hygon HygonGenuine
Dec 15 00:58:46 Proxmox102 kernel:   Centaur CentaurHauls
Dec 15 00:58:46 Proxmox102 kernel:   zhaoxin   Shanghai 
Dec 15 00:58:46 Proxmox102 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 15 00:58:46 Proxmox102 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 15 00:58:46 Proxmox102 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 15 00:58:46 Proxmox102 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 15 00:58:46 Proxmox102 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
 

Attachments

  • GUI_syslog_12-15.png
  • GUI_syslog_12-10.png
Please post a few details about your system and also the output of pveversion -v. If you have an IPMI interface, please check whether errors were logged there.

Since you're still running the 5.15 kernel, an upgrade might not hurt your environment. But whether it helps is another topic.
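As a general pointer, and only if journald keeps logs across reboots (Storage=persistent in /etc/systemd/journald.conf): you can list the boots the journal knows about and read the tail of the previous boot directly on the node. A minimal sketch:

Code:
# List the boots recorded in the journal (negative indexes are older boots)
journalctl --list-boots

# Jump to the end of the previous boot's log, where a crash or fence attempt would appear if anything was written out
journalctl -b -1 -e

# Only warnings and errors from the previous boot
journalctl -b -1 -p warning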
 
This is my pveversion -v output. How do I check whether I have an IPMI? Thank you!

Code:
root@Proxmox102:~# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.102-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.3-3
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-3
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-1
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.6.3
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20221111-1
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.11-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1
 
The node that auto-rebooted has two Intel(R) Xeon(R) E5-2680 v3 CPUs and 128 GB RAM; I use the ext4 filesystem.
 
Hi okela,
the node never auto-rebooted before I configured HA

How many nodes are in the HA setup, and what kind of connectivity do they have? I recall HA working within quite tight tolerances, kicking a node it perceives as offline into a reboot at the slightest delay.
Without an HA setup myself, I have no experience with these symptoms, though, and cannot say whether anything would show up in the syslog.
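For what it's worth, a rough way to look for fencing traces on the affected node (assuming the default PVE HA services; note that a watchdog reset is abrupt, so the very last messages often never make it to disk) would be something like:

Code:
# HA manager, watchdog and corosync messages recorded for the previous boot
journalctl -b -1 -u pve-ha-lrm -u pve-ha-crm -u watchdog-mux -u corosync

# Keyword search across the current and rotated syslogs
zgrep -i -E 'watchdog|fenc|quorum' /var/log/syslog*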

Dec 15 00:49:02 Proxmox102 pvescheduler[2537148]: INFO: Backup job finished successfully -- Reboot --

Nice that it waited for the backup to finish. Was that the case for both reboots, as far as you can find?
 
How do I check whether I have an IPMI? Thank you!
You usually know this because you explicitly chose this feature. Since you have a dual-socket system, it can be assumed that you have it too; dual-socket is more common in the enterprise environment. The IPMI may also be called iLO (HPE) or iDRAC (Dell) - at the end of the day it's the same thing.

You didn't give us any further details about your server, so I can only guess here.

Please also post the syslog again, showing the last 5-10 minutes before the failure.
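If the GUI view is cumbersome for that, one way to pull such a window out of the journal (times taken from the 15 Dec reboot above, adjust as needed) is:

Code:
# Roughly the last ten minutes before the crash, written to a file for posting
journalctl --since "2023-12-15 00:39:00" --until "2023-12-15 00:49:30" > /tmp/before-reboot.log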
 
My Dell server has an iDRAC, but I have never used it. This is the syslog before the reboot:

Code:
Dec 15 00:40:54 Proxmox102 pvestatd[1705]: status update time (5.307 seconds)
Dec 15 00:41:05 Proxmox102 pvestatd[1705]: status update time (5.140 seconds)
Dec 15 00:41:14 Proxmox102 pvestatd[1705]: status update time (5.304 seconds)
Dec 15 00:41:21 Proxmox102 pmxcfs[1593]: [dcdb] notice: data verification successful
Dec 15 00:41:24 Proxmox102 pvestatd[1705]: status update time (5.400 seconds)
Dec 15 00:41:34 Proxmox102 pvestatd[1705]: status update time (5.191 seconds)
Dec 15 00:41:45 Proxmox102 pvestatd[1705]: status update time (5.264 seconds)
Dec 15 00:41:54 Proxmox102 pvestatd[1705]: status update time (5.208 seconds)
Dec 15 00:42:04 Proxmox102 pvestatd[1705]: status update time (5.256 seconds)
Dec 15 00:42:15 Proxmox102 pvestatd[1705]: status update time (5.398 seconds)
Dec 15 00:42:24 Proxmox102 pvestatd[1705]: status update time (5.160 seconds)
Dec 15 00:42:34 Proxmox102 pvestatd[1705]: status update time (5.200 seconds)
Dec 15 00:42:44 Proxmox102 pvestatd[1705]: status update time (5.385 seconds)
Dec 15 00:42:54 Proxmox102 pvestatd[1705]: status update time (5.166 seconds)
Dec 15 00:43:05 Proxmox102 pvestatd[1705]: status update time (5.382 seconds)
Dec 15 00:43:14 Proxmox102 pvestatd[1705]: status update time (5.337 seconds)
Dec 15 00:43:24 Proxmox102 pvestatd[1705]: status update time (5.215 seconds)
Dec 15 00:43:35 Proxmox102 pvestatd[1705]: status update time (5.105 seconds)
Dec 15 00:43:44 Proxmox102 pvestatd[1705]: status update time (5.173 seconds)
Dec 15 00:43:56 Proxmox102 pvestatd[1705]: status update time (7.161 seconds)
Dec 15 00:44:06 Proxmox102 pvestatd[1705]: status update time (7.568 seconds)
Dec 15 00:44:17 Proxmox102 pvestatd[1705]: status update time (7.385 seconds)
Dec 15 00:44:26 Proxmox102 pvestatd[1705]: status update time (7.489 seconds)
Dec 15 00:44:37 Proxmox102 pvestatd[1705]: status update time (7.364 seconds)
Dec 15 00:44:46 Proxmox102 pvestatd[1705]: status update time (7.373 seconds)
Dec 15 00:44:56 Proxmox102 pvestatd[1705]: status update time (7.363 seconds)
Dec 15 00:45:07 Proxmox102 pvestatd[1705]: status update time (7.557 seconds)
Dec 15 00:45:16 Proxmox102 pvestatd[1705]: status update time (7.360 seconds)
Dec 15 00:45:27 Proxmox102 pvestatd[1705]: status update time (7.508 seconds)
Dec 15 00:45:36 Proxmox102 pvestatd[1705]: status update time (7.412 seconds)
Dec 15 00:45:47 Proxmox102 pvestatd[1705]: status update time (7.533 seconds)
Dec 15 00:45:56 Proxmox102 pvestatd[1705]: status update time (7.236 seconds)
Dec 15 00:46:06 Proxmox102 pvestatd[1705]: status update time (7.406 seconds)
Dec 15 00:46:17 Proxmox102 pvestatd[1705]: status update time (7.482 seconds)
Dec 15 00:46:26 Proxmox102 pvestatd[1705]: status update time (7.346 seconds)
Dec 15 00:46:37 Proxmox102 pvestatd[1705]: status update time (7.355 seconds)
Dec 15 00:46:46 Proxmox102 pvestatd[1705]: status update time (7.226 seconds)
Dec 15 00:46:56 Proxmox102 pvestatd[1705]: status update time (7.453 seconds)
Dec 15 00:47:07 Proxmox102 pvestatd[1705]: status update time (7.636 seconds)
Dec 15 00:47:16 Proxmox102 pvestatd[1705]: status update time (7.402 seconds)
Dec 15 00:47:27 Proxmox102 pvestatd[1705]: status update time (7.415 seconds)
Dec 15 00:47:36 Proxmox102 pvestatd[1705]: status update time (7.592 seconds)
Dec 15 00:47:47 Proxmox102 pvestatd[1705]: status update time (7.440 seconds)
Dec 15 00:47:56 Proxmox102 pvestatd[1705]: status update time (7.373 seconds)
Dec 15 00:48:07 Proxmox102 pvestatd[1705]: status update time (7.599 seconds)
Dec 15 00:48:17 Proxmox102 pvestatd[1705]: status update time (7.772 seconds)
Dec 15 00:48:26 Proxmox102 pvestatd[1705]: status update time (7.479 seconds)
Dec 15 00:48:36 Proxmox102 pvestatd[1705]: status update time (7.244 seconds)
Dec 15 00:48:47 Proxmox102 pvestatd[1705]: status update time (7.641 seconds)
Dec 15 00:49:02 Proxmox102 pvescheduler[2537148]: INFO: Finished Backup of VM 101 (00:48:55)
Dec 15 00:49:02 Proxmox102 pvescheduler[2537148]: INFO: Backup job finished successfully
-- Reboot --
 
Hi mate, I have a 4-node cluster and I configured HA for just one VM; the nodes are all connected on a single subnet. I also suspect HA caused this, but I haven't found anything related to HA in the logs. This is the other reboot log:

Code:
Dec 10 23:25:09 Proxmox102 pvestatd[1785]: storage 'Bk-Staging-112' is not online
Dec 10 23:25:12 Proxmox102 pvestatd[1785]: storage 'BK-TrueNas-200' is not online
Dec 10 23:25:15 Proxmox102 pvestatd[1785]: storage 'BK-Server-Oteam-111' is not online
Dec 10 23:25:18 Proxmox102 pvestatd[1785]: storage 'BK-Server-AI-175' is not online
Dec 10 23:25:21  pvestatd[1785]:
-- Reboot --
Dec 10 23:29:18 Proxmox102 kernel: Linux version 5.15.102-1-pve (build@proxmox) (gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2) #1 SMP PVE 5.15.102-1 (2023-03-14T13:48Z) ()
Dec 10 23:29:18 Proxmox102 kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-5.15.102-1-pve root=/dev/mapper/pve-root ro quiet
Dec 10 23:29:18 Proxmox102 kernel: KERNEL supported cpus:
Dec 10 23:29:18 Proxmox102 kernel:   Intel GenuineIntel
Dec 10 23:29:18 Proxmox102 kernel:   AMD AuthenticAMD
Dec 10 23:29:18 Proxmox102 kernel:   Hygon HygonGenuine
Dec 10 23:29:18 Proxmox102 kernel:   Centaur CentaurHauls
Dec 10 23:29:18 Proxmox102 kernel:   zhaoxin   Shanghai
 
That's unfortunate, because the iDRAC can potentially tell you what's going on. In its log you will find, for example, messages about the watchdog, whether there was an ECC error, whether the power limits were exceeded, or whether other errors and problems occurred.

Install ipmitool and run the command ipmitool sel list.

Please also post the output of these two commands: pvecm status and ha-manager status.
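On Debian/PVE that would be roughly the following (a sketch; the SEL read and the optional watchdog query are standard ipmitool subcommands):

Code:
apt install ipmitool

# Read the BMC/iDRAC System Event Log from the OS side
ipmitool sel list

# Optional: check whether a BMC watchdog timer is configured/armed
ipmitool mc watchdog get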
 
Hi, sorry for the late reply. I deleted the HA configuration of my VM before running these commands.
*ha-manager status

Code:
ha-manager status
quorum OK
master proxmox122 (active, Mon Dec 18 13:37:53 2023)
lrm Proxmox102 (idle, Mon Dec 18 13:37:53 2023)
lrm Proxmox106 (idle, Mon Dec 18 13:37:53 2023)
lrm Proxmox121 (idle, Mon Dec 18 13:37:55 2023)
lrm proxmox122 (idle, Mon Dec 18 13:37:53 2023)

*pvecm status

Code:
pvecm status
Cluster information
-------------------
Name:             HA-PROXMOX
Config Version:   4
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Mon Dec 18 13:38:30 2023
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000003
Ring ID:          1.35
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           3 
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.105.121
0x00000002          1 192.168.105.122
0x00000003          1 192.168.105.102 (local)
0x00000004          1 192.168.105.106

*ipmitool sel list

Code:
ipmitool sel list
   1 | 09/14/2022 | 16:52:05 | Event Logging Disabled #0x72 | Log area reset/cleared | Asserted
   2 | 09/19/2022 | 09:53:32 | Physical Security #0x73 | General Chassis intrusion () | Asserted
   3 | 09/19/2022 | 09:53:37 | Physical Security #0x73 | General Chassis intrusion () | Deasserted
   4 | 09/19/2022 | 09:53:40 | Power Supply #0x63 | Power Supply AC lost | Asserted
   5 | 09/19/2022 | 10:17:05 | Drive Slot / Bay #0xa0 | Drive Present () | Deasserted
   6 | 09/19/2022 | 10:22:42 | Drive Slot / Bay #0xa0 | Drive Present () | Asserted
   7 | 10/20/2022 | 16:31:54 | Power Supply #0x63 | Power Supply AC lost | Asserted
   8 | 10/20/2022 | 17:06:20 | Power Supply #0x63 | Power Supply AC lost | Asserted
   9 | 10/20/2022 | 17:30:08 | Power Supply #0x63 | Power Supply AC lost | Deasserted
   a | 02/13/2023 | 19:06:07 | Voltage #0x2c | State Asserted | Asserted
   b | 02/13/2023 | 19:06:11 | Power Supply #0x63 | Failure detected () | Asserted
   c | 03/03/2023 | 11:34:56 | Power Supply #0x63 | Power Supply AC lost | Asserted
   d | 03/03/2023 | 11:35:31 | Power Supply #0x63 | Presence detected | Deasserted
   e | 03/03/2023 | 11:35:32 | Power Supply #0x63 | Failure detected | Deasserted
   f | 03/03/2023 | 11:37:22 | Power Supply #0x63 | Presence detected | Asserted
  10 | 03/03/2023 | 16:12:29 | Drive Slot / Bay #0xa1 | Drive Present () | Deasserted
  11 | 03/03/2023 | 16:12:31 | Drive Slot / Bay #0xa0 | Drive Present () | Deasserted
  12 | 03/04/2023 | 08:22:27 | Physical Security #0x73 | General Chassis intrusion () | Asserted
  13 | 03/04/2023 | 08:22:32 | Physical Security #0x73 | General Chassis intrusion () | Deasserted
  14 | 03/04/2023 | 08:24:15 | Unknown #0x2e |  | Asserted
  15 | 03/04/2023 | 08:24:15 | Memory #0x02 | Uncorrectable ECC (UnCorrectable ECC |  DIMMA1) | Asserted
  16 | 03/04/2023 | 08:26:20 | Drive Slot / Bay #0xa1 | Drive Fault () | Asserted
  17 | 03/04/2023 | 08:26:22 | Drive Slot / Bay #0xa0 | Drive Fault () | Asserted
  18 | 03/04/2023 | 08:31:32 | Drive Slot / Bay #0xa0 | Drive Present () | Deasserted
  19 | 03/04/2023 | 08:31:33 | Drive Slot / Bay #0xa0 | Drive Fault () | Deasserted
  1a | 03/04/2023 | 08:31:43 | Drive Slot / Bay #0xa0 | Drive Present () | Asserted
  1b | 03/04/2023 | 08:31:51 | Drive Slot / Bay #0xa1 | Drive Present () | Deasserted
  1c | 03/04/2023 | 08:31:52 | Drive Slot / Bay #0xa1 | Drive Fault () | Deasserted
  1d | 03/04/2023 | 08:32:07 | Drive Slot / Bay #0xa1 | Drive Present () | Asserted
  1e | 03/04/2023 | 08:51:23 | Physical Security #0x73 | General Chassis intrusion () | Asserted
  1f | 03/04/2023 | 08:51:32 | Power Supply #0x63 | Power Supply AC lost | Asserted
  20 | 03/04/2023 | 08:54:32 | OS Boot | C: boot completed | Asserted
  21 | 03/04/2023 | 08:54:32 | OEM record dc | 000137 | 003907036400
  22 | 03/04/2023 | 08:55:30 | OS Critical Stop | OS graceful shutdown | Asserted
  23 | 03/04/2023 | 08:55:30 | OEM record dd | 000137 | 00ff00050000
  24 | 03/04/2023 | 09:13:39 | Physical Security #0x73 | General Chassis intrusion () | Asserted
  25 | 03/04/2023 | 09:13:45 | Physical Security #0x73 | General Chassis intrusion () | Deasserted
  26 | 03/06/2023 | 04:05:07 | Physical Security #0x73 | General Chassis intrusion () | Asserted
  27 | 03/06/2023 | 04:05:14 | Power Supply #0x62 | Power Supply AC lost | Asserted
  28 | 03/06/2023 | 04:15:29 | Physical Security #0x73 | General Chassis intrusion () | Asserted
  29 | 03/06/2023 | 04:15:35 | Physical Security #0x73 | General Chassis intrusion () | Deasserted
  2a | 03/06/2023 | 04:24:04 | Drive Slot / Bay #0xa7 | Drive Fault () | Asserted
  2b | 03/06/2023 | 04:24:06 | Drive Slot / Bay #0xa6 | Drive Fault () | Asserted
  2c | 03/06/2023 | 04:26:14 | Drive Slot / Bay #0xa7 | Drive Fault () | Deasserted
  2d | 03/06/2023 | 04:26:16 | Drive Slot / Bay #0xa6 | Drive Fault () | Deasserted
  2e | 03/06/2023 | 04:28:39 | Drive Slot / Bay #0xa7 | Drive Fault () | Asserted
  2f | 03/06/2023 | 04:28:41 | Drive Slot / Bay #0xa6 | Drive Fault () | Asserted
  30 | 03/06/2023 | 04:34:01 | Drive Slot / Bay #0xa6 | Drive Present () | Deasserted
  31 | 03/06/2023 | 04:34:02 | Drive Slot / Bay #0xa6 | Drive Fault () | Deasserted
  32 | 03/06/2023 | 04:34:05 | Drive Slot / Bay #0xa7 | Drive Present () | Deasserted
  33 | 03/06/2023 | 04:34:06 | Drive Slot / Bay #0xa7 | Drive Fault () | Deasserted
  34 | 03/06/2023 | 04:39:29 | Drive Slot / Bay #0xa6 | Drive Present () | Asserted
  35 | 03/06/2023 | 04:41:27 | Drive Slot / Bay #0xa7 | Drive Present () | Asserted
  36 | 04/11/2023 | 07:59:20 | Drive Slot / Bay #0xa4 | Drive Fault () | Asserted
  37 | 04/12/2023 | 07:07:18 | Drive Slot / Bay #0xa4 | Drive Present () | Deasserted
  38 | 04/12/2023 | 07:07:18 | Drive Slot / Bay #0xa4 | Drive Fault () | Deasserted
  39 | 04/20/2023 | 08:44:07 | Drive Slot / Bay #0xa4 | Drive Present () | Asserted
  3a | 05/26/2023 | 02:29:22 | Power Supply #0x63 | Power Supply AC lost | Asserted
  3b | 05/26/2023 | 02:29:53 | Power Supply #0x63 | Power Supply AC lost | Deasserted
  3c | 05/26/2023 | 03:26:34 | Power Supply #0x63 | Power Supply AC lost | Asserted
  3d | 05/26/2023 | 06:56:07 | Power Supply #0x63 | Power Supply AC lost | Deasserted
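
Since ECC errors were mentioned above, a quick sanity check from the OS side for memory or machine-check errors (a sketch; rasdaemon is optional and not installed by default) could be:

Code:
# Machine-check / EDAC messages seen by the kernel in the current boot
dmesg | grep -i -E 'mce|edac|hardware error'

# With rasdaemon installed, per-DIMM corrected/uncorrected error counters
apt install rasdaemon
ras-mc-ctl --error-count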
 
