Crash / unresponsive every 2-3 weeks

philtao

New Member
Mar 11, 2025
40660 Messanges, France
Hi,
I have installed Proxmox on a mini PC to run Home Assistant (VM) and other servers in LXC containers (MariaDB, InfluxDB, Grafana, Node-RED), and I am learning a bit more every day. All is working fine except for three system crashes, each happening about 17 days after the previous reboot.
Any guidance to help me identify the cause(s) would be great.

Symptoms:
  • PVE, the VM and the LXCs have disappeared from the network, so I cannot SSH in
  • connecting a monitor to the server gives no HDMI signal
Hardware / Proxmox:
  • Mini PC Acemagic S1 (Intel Alder Lake N97), 16GB DDR4, 1TB M.2 SSD
  • External USB drive 1 TB (/dev/sdb ext4) used for daily backup of each VM and LXC
  • Kernel version: Linux 6.8.12-5-pve (2024-12-03T10:26Z)
  • Manager version: pve-manager/8.3.3/f157a38b211595d6
  • RAM usage stable at 27%
[screenshot: Proxmox summary graph showing stable RAM usage]

Logs:
Below are the most frequent errors logged between reboot and crash.

During boot a number of "bug" and error messages are reported, but based on other threads these might be known issues that can be ignored (?):
Code:
Jan 29 18:01:15 pve kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PC00.TXHC.RHUB.SS01], AE_NOT_FOUND (20230628/dswload2-162)
Jan 29 18:01:15 pve kernel: ACPI Error: AE_NOT_FOUND, During name lookup/catalog (20230628/psobject-220)
Jan 29 18:01:15 pve kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PC00.TXHC.RHUB.SS02], AE_NOT_FOUND (20230628/dswload2-162)
Jan 29 18:01:15 pve kernel: ACPI Error: AE_NOT_FOUND, During name lookup/catalog (20230628/psobject-220)
Jan 29 18:01:15 pve kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.UBTC.RUCC], AE_NOT_FOUND (20230628/psargs-330)
Jan 29 18:01:15 pve kernel: ACPI Error: Aborting method \_SB.PC00.XHCI.RHUB.HS01._PLD due to previous error (AE_NOT_FOUND) (20230628/psparse-529)
Jan 29 18:01:15 pve kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.UBTC.RUCC], AE_NOT_FOUND (20230628/psargs-330)
Jan 29 18:01:15 pve kernel: ACPI Error: Aborting method \_SB.PC00.XHCI.RHUB.HS02._PLD due to previous error (AE_NOT_FOUND) (20230628/psparse-529)
Jan 29 18:01:15 pve kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PC00.LPCB.ITE8.GETT], AE_NOT_FOUND (20230628/psargs-330)
Jan 29 18:01:15 pve kernel: ACPI Error: Aborting method \_TZ.TZ00._TMP due to previous error (AE_NOT_FOUND) (20230628/psparse-529)
Jan 29 18:01:15 pve kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PC00.LPCB.ITE8.GETT], AE_NOT_FOUND (20230628/psargs-330)
Jan 29 18:01:15 pve kernel: ACPI Error: Aborting method \_TZ.TZ00._TMP due to previous error (AE_NOT_FOUND) (20230628/psparse-529)
Jan 29 18:01:15 pve kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.UBTC.RUCC], AE_NOT_FOUND (20230628/psargs-330)
Jan 29 18:01:15 pve kernel: ACPI Error: Aborting method \_SB.PC00.XHCI.RHUB.HS01._PLD due to previous error (AE_NOT_FOUND) (20230628/psparse-529)
Jan 29 18:01:15 pve kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.UBTC.RUCC], AE_NOT_FOUND (20230628/psargs-330)
Jan 29 18:01:15 pve kernel: ACPI Error: Aborting method \_SB.PC00.XHCI.RHUB.HS01._PLD due to previous error (AE_NOT_FOUND) (20230628/psparse-529)
Jan 29 18:01:15 pve kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.UBTC.RUCC], AE_NOT_FOUND (20230628/psargs-330)
Jan 29 18:01:15 pve kernel: ACPI Error: Aborting method \_SB.PC00.XHCI.RHUB.HS02._PLD due to previous error (AE_NOT_FOUND) (20230628/psparse-529)
Jan 29 18:01:15 pve kernel: ACPI BIOS Error (bug): Could not resolve symbol [\_SB.UBTC.RUCC], AE_NOT_FOUND (20230628/psargs-330)
Jan 29 18:01:15 pve kernel: ACPI Error: Aborting method \_SB.PC00.XHCI.RHUB.HS02._PLD due to previous error (AE_NOT_FOUND) (20230628/psparse-529)
Jan 29 18:01:16 pve kernel: usbhid 1-6:1.3: couldn't find an input interrupt endpoint
Jan 29 18:01:16 pve kernel: usbhid 1-8:1.1: couldn't find an input interrupt endpoint

Every day at 03:00:00 there is:
Code:
Feb 23 03:00:04 pve kernel: EXT4-fs (dm-12): write access unavailable, skipping orphan cleanup
That corresponds to the start of an LXC backup (102). Note that 4 backups to the ext4 drive already succeeded between 01:00 and 02:30.
Could the accumulation of these "skipping orphan cleanup" messages eventually trigger a crash?
However, according to this thread: "The "write access unavailable, skipping orphan cleanup" message is apparently just a byproduct of the snapshot being read-only, and you can ignore it [2]."
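To check whether these messages really occur only once per backup run rather than accumulating, they can be counted in the journal (a quick sketch; adjust the search string if your message differs):
Code:
# Count "skipping orphan cleanup" kernel messages in the current boot
journalctl -k -b 0 | grep -c "skipping orphan cleanup"
# List them with timestamps to correlate with the backup start times
journalctl -k -b 0 | grep "skipping orphan cleanup"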

A few times in the last 2 months, series of errors like these have occurred:
Code:
Feb 06 09:55:33 pve kernel: BUG: Bad page map in process pvestatd  pte:8400004224c47805 pmd:1641e2067
Feb 06 09:55:33 pve kernel: addr:0000587921eb1000 vm_flags:08100073 anon_vma:ffff95e7e22e01a0 mapping:0000000000000000 index:587921eb1
Feb 06 09:55:33 pve kernel: file:(null) fault:0x0 mmap:0x0 read_folio:0x0

Feb 15 07:40:39 pve kernel: BUG: unable to handle page fault for address: ffff95269b976f60
Feb 15 07:40:39 pve kernel: #PF: supervisor read access in kernel mode
Feb 15 07:40:39 pve kernel: #PF: error_code(0x0000) - not-present page
Feb 15 07:40:39 pve pve-firewall[974]: status update error: command 'ipset save' failed: got signal 9
Feb 15 07:40:49 pve kernel: BUG: unable to handle page fault for address: ffff95269b976f60
Feb 15 07:40:49 pve kernel: #PF: supervisor read access in kernel mode
Feb 15 07:40:49 pve kernel: #PF: error_code(0x0000) - not-present page
Feb 15 07:40:49 pve kernel: pstore: backend (efi_pstore) writing error (-5)

Feb 27 04:09:32 pve kernel: get_swap_device: Bad swap offset entry 3ffff9fffffff
Feb 27 04:09:32 pve kernel: BUG: Bad page map in process pvestatd  pte:c000000000 pmd:1166a8067
Feb 27 04:09:32 pve kernel: addr:000070b482400000 vm_flags:08000075 anon_vma:0000000000000000 mapping:ffff94fd8b6b9520 index:3f
Feb 27 04:09:32 pve kernel: file:libsystemd.so.0.35.0 fault:filemap_fault mmap:ext4_file_mmap read_folio:ext4_read_folio
Feb 27 04:09:32 pve kernel: get_swap_device: Bad swap offset entry 3ffffbfffffff

Here is the log for the few hours before the last crash and the hard power-down + reboot:
Code:
Mar 11 00:00:12 pve systemd[1]: Starting dpkg-db-backup.service - Daily dpkg database backup service...
Mar 11 00:00:12 pve systemd[1]: Starting logrotate.service - Rotate log files...
Mar 11 00:00:12 pve systemd[1]: dpkg-db-backup.service: Deactivated successfully.
Mar 11 00:00:12 pve systemd[1]: Finished dpkg-db-backup.service - Daily dpkg database backup service.
Mar 11 00:00:12 pve systemd[1]: Reloading pveproxy.service - PVE API Proxy Server...
Mar 11 00:00:13 pve pveproxy[2218983]: send HUP to 1018
Mar 11 00:00:13 pve pveproxy[1018]: received signal HUP
Mar 11 00:00:13 pve pveproxy[1018]: server closing
Mar 11 00:00:13 pve pveproxy[1018]: server shutdown (restart)
Mar 11 00:00:13 pve systemd[1]: Reloaded pveproxy.service - PVE API Proxy Server.
Mar 11 00:00:13 pve systemd[1]: Reloading spiceproxy.service - PVE SPICE Proxy Server...
Mar 11 00:00:13 pve spiceproxy[2218992]: send HUP to 1023
Mar 11 00:00:13 pve systemd[1]: Reloaded spiceproxy.service - PVE SPICE Proxy Server.
Mar 11 00:00:13 pve spiceproxy[1023]: received signal HUP
Mar 11 00:00:13 pve spiceproxy[1023]: server closing
Mar 11 00:00:13 pve spiceproxy[1023]: server shutdown (restart)
Mar 11 00:00:13 pve systemd[1]: Stopping pvefw-logger.service - Proxmox VE firewall logger...
Mar 11 00:00:13 pve pvefw-logger[1824567]: received terminate request (signal)
Mar 11 00:00:13 pve pvefw-logger[1824567]: stopping pvefw logger
Mar 11 00:00:13 pve systemd[1]: pvefw-logger.service: Deactivated successfully.
Mar 11 00:00:13 pve systemd[1]: Stopped pvefw-logger.service - Proxmox VE firewall logger.
Mar 11 00:00:13 pve systemd[1]: pvefw-logger.service: Consumed 18.252s CPU time.
Mar 11 00:00:13 pve systemd[1]: Starting pvefw-logger.service - Proxmox VE firewall logger...
Mar 11 00:00:13 pve systemd[1]: Started pvefw-logger.service - Proxmox VE firewall logger.
Mar 11 00:00:13 pve pvefw-logger[2219006]: starting pvefw logger
Mar 11 00:00:13 pve systemd[1]: logrotate.service: Deactivated successfully.
Mar 11 00:00:13 pve systemd[1]: Finished logrotate.service - Rotate log files.
Mar 11 00:00:13 pve spiceproxy[1023]: restarting server
Mar 11 00:00:13 pve spiceproxy[1023]: starting 1 worker(s)
Mar 11 00:00:13 pve spiceproxy[1023]: worker 2219017 started
Mar 11 00:00:14 pve pveproxy[1018]: restarting server
Mar 11 00:00:14 pve pveproxy[1018]: starting 3 worker(s)
Mar 11 00:00:14 pve pveproxy[1018]: worker 2219027 started
Mar 11 00:00:14 pve pveproxy[1018]: worker 2219028 started
Mar 11 00:00:14 pve pveproxy[1018]: worker 2219029 started
Mar 11 00:00:18 pve spiceproxy[1430903]: worker exit
Mar 11 00:00:19 pve pveproxy[2142236]: worker exit
Mar 11 00:00:19 pve pveproxy[2141788]: worker exit
Mar 11 00:00:19 pve pveproxy[2140072]: worker exit
Mar 11 00:00:19 pve pveproxy[1018]: worker 2140072 finished
Mar 11 00:00:19 pve pveproxy[1018]: worker 2141788 finished
Mar 11 00:00:19 pve pveproxy[1018]: worker 2142236 finished
Mar 11 00:00:19 pve spiceproxy[1023]: worker 1430903 finished
Mar 11 00:01:38 pve systemd[1]: Starting apt-daily.service - Daily apt download activities...
Mar 11 00:01:38 pve systemd[1]: apt-daily.service: Deactivated successfully.
Mar 11 00:01:38 pve systemd[1]: Finished apt-daily.service - Daily apt download activities.
Mar 11 00:17:01 pve CRON[2223654]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Mar 11 00:17:01 pve CRON[2223655]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Mar 11 00:17:01 pve CRON[2223654]: pam_unix(cron:session): session closed for user root
Mar 11 00:24:01 pve CRON[2225541]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Mar 11 00:24:01 pve CRON[2225542]: (root) CMD (if [ $(date +%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi)
Mar 11 00:24:01 pve CRON[2225541]: pam_unix(cron:session): session closed for user root
Mar 11 01:00:01 pve pvescheduler[2235273]: <root@pam> starting task UPID:pve:00221B8A:08694842:67CF7D01:vzdump:104:root@pam:
Mar 11 01:00:01 pve pvescheduler[2235274]: INFO: starting new backup job: vzdump 104 --storage backup --prune-backups 'keep-daily=7,keep-monthly=3,kee>
Mar 11 01:00:01 pve pvescheduler[2235274]: INFO: Starting Backup of VM 104 (lxc)
Mar 11 01:00:01 pve dmeventd[371]: No longer monitoring thin pool pve-data-tpool.
Mar 11 01:00:01 pve dmeventd[371]: Monitoring thin pool pve-data-tpool.
Mar 11 01:00:01 pve kernel: EXT4-fs (dm-12): mounted filesystem 2effa0f4-aa8e-470e-bb09-e72788b04fb0 ro without journal. Quota mode: none.
Mar 11 01:00:18 pve kernel: EXT4-fs (dm-12): unmounting filesystem 2effa0f4-aa8e-470e-bb09-e72788b04fb0.
Mar 11 01:00:18 pve pvescheduler[2235274]: INFO: Finished Backup of VM 104 (00:00:17)
Mar 11 01:00:18 pve pvescheduler[2235274]: INFO: Backup job finished successfully
Mar 11 01:17:01 pve CRON[2240057]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Mar 11 01:17:01 pve CRON[2240058]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Mar 11 01:17:01 pve CRON[2240057]: pam_unix(cron:session): session closed for user root
Mar 11 01:30:01 pve pvescheduler[2243600]: <root@pam> starting task UPID:pve:00223C11:086C0796:67CF8409:vzdump:110:root@pam:
Mar 11 01:30:01 pve pvescheduler[2243601]: INFO: starting new backup job: vzdump 110 --mailnotification failure --fleecing 0 --mailto jeanmairep@gmail>
Mar 11 01:30:01 pve pvescheduler[2243601]: INFO: Starting Backup of VM 110 (qemu)
Mar 11 01:30:02 pve systemd[1]: Started 110.scope.
Mar 11 01:30:02 pve kernel: tap110i0: entered promiscuous mode
Mar 11 01:30:02 pve kernel: vmbr0: port 7(fwpr110p0) entered blocking state
Mar 11 01:30:02 pve kernel: vmbr0: port 7(fwpr110p0) entered disabled state
Mar 11 01:30:02 pve kernel: fwpr110p0: entered allmulticast mode
Mar 11 01:30:02 pve kernel: fwpr110p0: entered promiscuous mode
Mar 11 01:30:02 pve kernel: vmbr0: port 7(fwpr110p0) entered blocking state
Mar 11 01:30:02 pve kernel: vmbr0: port 7(fwpr110p0) entered forwarding state
Mar 11 01:30:02 pve kernel: fwbr110i0: port 1(fwln110i0) entered blocking state
Mar 11 01:30:02 pve kernel: fwbr110i0: port 1(fwln110i0) entered disabled state
Mar 11 01:30:02 pve kernel: fwln110i0: entered allmulticast mode
Mar 11 01:30:02 pve kernel: fwln110i0: entered promiscuous mode
Mar 11 01:30:02 pve kernel: fwbr110i0: port 1(fwln110i0) entered blocking state
Mar 11 01:30:02 pve kernel: fwbr110i0: port 1(fwln110i0) entered forwarding state
Mar 11 01:30:02 pve kernel: fwbr110i0: port 2(tap110i0) entered blocking state
Mar 11 01:30:02 pve kernel: fwbr110i0: port 2(tap110i0) entered disabled state
Mar 11 01:30:02 pve kernel: tap110i0: entered allmulticast mode
Mar 11 01:30:02 pve kernel: fwbr110i0: port 2(tap110i0) entered blocking state
Mar 11 01:30:02 pve kernel: fwbr110i0: port 2(tap110i0) entered forwarding state
Mar 11 01:30:02 pve pvescheduler[2243601]: VM 110 started with PID 2243620.
Mar 11 01:35:11 pve kernel: tap110i0: left allmulticast mode
Mar 11 01:35:11 pve kernel: fwbr110i0: port 2(tap110i0) entered disabled state
Mar 11 01:35:11 pve kernel: fwbr110i0: port 1(fwln110i0) entered disabled state
Mar 11 01:35:11 pve kernel: vmbr0: port 7(fwpr110p0) entered disabled state
Mar 11 01:35:11 pve kernel: fwln110i0 (unregistering): left allmulticast mode
Mar 11 01:35:11 pve kernel: fwln110i0 (unregistering): left promiscuous mode
Mar 11 01:35:11 pve kernel: fwbr110i0: port 1(fwln110i0) entered disabled state
Mar 11 01:35:11 pve kernel: fwpr110p0 (unregistering): left allmulticast mode
Mar 11 01:35:11 pve kernel: fwpr110p0 (unregistering): left promiscuous mode
Mar 11 01:35:11 pve kernel: vmbr0: port 7(fwpr110p0) entered disabled state
Mar 11 01:35:11 pve qmeventd[660]: read: Connection reset by peer
Mar 11 01:35:11 pve systemd[1]: 110.scope: Deactivated successfully.
Mar 11 01:35:11 pve systemd[1]: 110.scope: Consumed 30.717s CPU time.
Mar 11 01:35:11 pve qmeventd[2245076]: Starting cleanup for 110
Mar 11 01:35:11 pve qmeventd[2245076]: Finished cleanup for 110
Mar 11 01:35:13 pve pvescheduler[2243601]: INFO: Finished Backup of VM 110 (00:05:12)
Mar 11 01:35:13 pve pvescheduler[2243601]: INFO: Backup job finished successfully
-- Boot 0a5cf5e892c849a0bf53ce665f7842e9 --
Mar 11 08:37:55 pve kernel: Linux version 6.8.12-5-pve (build@proxmox) (gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP P>
After the last entry before the reboot, further backups were scheduled at 02:00, 02:30, and 03:00 (the 03:00 one being the backup that generates the ext4 message), but none of them appear in the log.
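For reference, this is roughly how the previous boot's logs can be pulled from journald (assuming persistent logging, which appears to be the default on PVE):
Code:
# List recorded boots; the crashed session is the one before the current boot
journalctl --list-boots
# Kernel messages of priority "err" and above from the previous boot
journalctl -k -b -1 -p err
# The last entries written before the crash (may stop abruptly)
journalctl -b -1 -n 100 --no-pager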

I ran Memtest86 to exclude potential memory problems; no errors were reported after 2 passes:
[photo: Memtest86 result screen showing 2 passes, 0 errors]

Any suggestions for further diagnostics I could run to find the cause of these crashes would be great.
 
I imagine it is most likely a HW issue, the best candidate being the power supply. External USB drives can sometimes consume a lot of power, especially on a mini PC setup, and this will wreak havoc on the various buses/power rails of the system. It is also possible that the bus controller(s) simply can't keep up under a lot of stress, copying different files & data (sometimes numerous & large) between the various attached devices & NICs. These mini PCs usually come with awful chip controllers etc.
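If you want a rough idea of how much the attached USB devices claim to draw, something like this lists each device's advertised maximum (actual draw can differ, so treat it as an indication only):
Code:
# Show each USB device together with its declared maximum power draw
lsusb -v 2>/dev/null | grep -E '^Bus|MaxPower'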

The next candidate in my opinion would be thermals: many chips under heavy load simply don't function correctly, or at all. To test this theory you could open the casing of the PC (somewhat) to get better venting. Maybe even add a fan on the whole thing & see the effect.

Lastly, to test whether this is at all Proxmox-related, you could boot it up with a different OS (live media?) & load it with a stress test, with & without that USB drive.
 
Thanks for your analysis.
Power supply:
Although there is no log entry suggesting that a backup is in progress when the system crashes, I will disconnect the external USB drive and find another way to back up.

Thermal:
The mini PC is in an unheated room with a temperature below 15°C, but I will open the casing as suggested anyway.
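To keep an eye on the temperatures while testing, I plan to use lm-sensors (a sketch; package availability and chip detection may vary on this board):
Code:
apt install lm-sensors
sensors-detect --auto    # probe for sensor chips, accepting the defaults
watch -n 5 sensors       # refresh the temperature readings every 5 seconds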

Stress test:
I will look into how I can do that and report back here.

Does your response suggest that there is nothing suspicious in the error logs that I should worry about?
 
Hi and welcome to the Proxmox forum, philtao!

During boot a number of "bug" and error messages are reported, but based on other threads these might be known issues that can be ignored
These ACPI errors could be anywhere between merely informational and indicative of a hardware failure. Do you have the latest BIOS firmware for your mainboard installed? Does (temporarily) booting the opt-in 6.11 kernel change anything about these error messages?
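For reference, installing the opt-in kernel would look roughly like this (package name as of PVE 8.3; you can pin the old kernel again if it misbehaves):
Code:
apt update
apt install proxmox-kernel-6.11
reboot
# to return to the previous kernel later:
proxmox-boot-tool kernel pin 6.8.12-5-pve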

A few times in the last 2 months, series of errors like these occurred:
I cannot really pinpoint any specific cause (especially with a limited function trace), but it seems like two storage-related page faults happened here. Just to check in on this, you could verify the health of your hard drives and the external disk.
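A quick way to check drive health is smartmontools (the device names below are assumptions; adjust them to your setup, and USB-SATA bridges often need -d sat):
Code:
apt install smartmontools
smartctl -a /dev/nvme0n1        # internal M.2 SSD (may also show up as /dev/sda)
smartctl -a -d sat /dev/sdb     # external USB backup drive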

Otherwise, I would also go along with @gfngfn256's suggestion to run a stress test. A well-known stress-testing suite on Linux is stress-ng.
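As a rough starting point (the parameters are only a suggestion; keep an eye on temperatures while it runs):
Code:
apt install stress-ng
# load all CPU cores and ~75% of RAM for 30 minutes
stress-ng --cpu 0 --vm 2 --vm-bytes 75% --timeout 30m --metrics-brief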