System Hang at 'Reached target Reboot'

Apr 26, 2018
Proxmox 5.4, fully updated, kernel: 4.15.18-12. We do not use OOB or clustering. pveversion output below.

We have a Proxmox system that, about every other reboot, hangs at the "Reached target Reboot" console message. Left unattended during a full hang, the system finally reboots after about 30 minutes; the system logs confirm the 30 minutes. That period hardly seems coincidental, as though it were hard-coded somewhere.

The root cause of the delay is not yet known. Searching the web turns up many similar reports and suggests two common causes: an NFS client hanging, or the swap partition failing to unmount.

Yesterday I locally rebooted the server after updates. Before starting the reboot I opened a debug-shell console (systemctl start debug-shell).
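
For anyone who wants to try the same, a minimal sketch of the debug-shell setup (debug-shell.service puts a root shell on tty9 by default; the enable step is optional and should be reverted after debugging):
Code:
# start an unauthenticated root shell on tty9 for this boot only
systemctl start debug-shell.service
# optionally keep it available across reboots while debugging (disable it again afterwards)
systemctl enable debug-shell.service
# during the hang, switch to it on the local console with Ctrl+Alt+F9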

The system again seemed to hang at 'Reached target Reboot'. I toggled to the debug console and found swap not yet unmounted. I don't know if that is normal. I barely had time to investigate further because at that moment, the system continued with the reboot.

This time the system did not hang for the full 30 minutes, but I was also unable to collect or view any meaningful data. I cannot say for certain whether the several-second delay witnessed yesterday is normal.

Like many VM servers, we do not reboot often, so collecting data is a challenge. Without resolving this delay we cannot reboot this system remotely.

Any ideas of the root cause of this delay?

Thanks much. :)

pveversion -v:
Code:
proxmox-ve: 5.4-1 (running kernel: 4.15.18-12-pve)
pve-manager: 5.4-3 (running version: 5.4-3/0a6eaa62)
pve-kernel-4.15: 5.3-3
pve-kernel-4.15.18-12-pve: 4.15.18-35
pve-kernel-4.15.18-11-pve: 4.15.18-34
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-50
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-41
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-25
pve-cluster: 5.0-36
pve-container: 2.0-37
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-19
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 2.12.1-3
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-50
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
 
Could you post some Log Entries (syslog might be enough) and fstab, fdisk etc.?
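
Something like this should collect the relevant bits (the previous boot's journal is only available if persistent journalling is enabled; the output file name is arbitrary):
Code:
journalctl -b -1 > prev-boot.log   # journal of the previous boot
cat /etc/fstab
fdisk -l
lsblk -f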
 
The system again seemed to hang at 'Reached target Reboot'. I toggled to the debug console and found swap not yet unmounted. I don't know if that is normal. I barely had time to investigate further because at that moment, the system continued with the reboot.
what kind of swap-device do you use? (on a hunch: if it is a zvol, this can cause hangs and deadlocks, and we do not recommend this setup anymore)
 
Could you post some Log Entries (syslog might be enough) and fstab, fdisk etc.?
Nothing useful to post. The last two reboots did not hang for the infamous 30 minutes.

what kind of swap-device do you use?
A traditional swap partition created by the Proxmox installer. We don't use ZFS.

The frustrating part now is that I am "gun shy." I've been bitten three times by the 30-minute hang and am now reluctant to reboot remotely. Rebooting locally costs us time and money. :(
 
Me too. Unfortunately, it happens randomly.

My /etc/fstab (it's a normal swap partition on an SSD, same device as root):
Code:
/dev/pve/root / xfs defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
 
Just for the record, this reboot hang hit us again on this same server. No clues in the logs. Again, after 30 minutes the system finally rebooted.

I suspect the 30 minutes is a systemd timeout.
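
If it is, something like this should show where it comes from (the property names are systemd's; I have not verified the values on this box, but on many systemd versions reboot.target ships with a 30-minute job timeout that force-reboots when shutdown jobs hang):
Code:
# job timeout and action configured on the reboot target
systemctl show reboot.target -p JobTimeoutUSec -p JobTimeoutAction
# default stop timeout applied to individual units
systemctl show -p DefaultTimeoutStopUSec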

We have some NFS mounts preconfigured in storage.cfg, but they are disabled; they exist only for disaster recovery, so we can restore files from backup servers. With no active NFS mounts, that seems to point toward swap not being unmounted properly.

Debugging this is a challenge because the system hangs only occasionally. Thus far, when rebooting the server locally, it has yet to hang. When the system hangs after a remote reboot, the drive time is more than 30 minutes, so trying to use an alternate debug-shell is futile. :)
 
* You could try to set up syslog forwarding to a remote destination (or remote journalling)
* make sure you have enabled persistent journalling (a short sketch follows after this list)

* If the server is in a remote location - does it have some kind of out-of-band management? (IPMI, iDRAC, iKVM, anything your particular ISP/hoster offers, or someone working at the remote location with a camera?) - then you could use that to get some output
* if you have a second machine in that location, setting up a serial console with a crossover cable could also provide some insight
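
For the journalling part, a rough sketch (the remote address is just a placeholder):
Code:
# persistent journal: create the directory and let journald pick it up
mkdir -p /var/log/journal
systemd-tmpfiles --create --prefix /var/log/journal
systemctl restart systemd-journald
# optional: forward syslog to a remote host via rsyslog (replace 192.0.2.10 with your log server)
echo '*.* @192.0.2.10:514' > /etc/rsyslog.d/90-remote.conf
systemctl restart rsyslog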

hope this helps!
 
Hi,

FYI, I have had the same experience with PM 5 after fully updating it around a month or two ago.
I rebooted it remotely and it did not come up. I drove to the office (1 a.m. - the life of sysadmins :-( ).
Just as I unlocked the office door, around the 30-minute mark, the monitoring system sent me an SMS that it was back up.
Logs revealed nothing. Another reboot also revealed nothing and was as fast as expected. I haven't updated it since.
Will follow this thread to see what you guys find out.
 
I have just had the same experience, right now, with the same Supermicro AMD hardware type but on a different server.
It took exactly 30 minutes to come back. I failed to access IPMI or attach a screen in time.
 
Logs show nothing interesting:
Code:
Sep 10 06:27:45 lic2 systemd[1]: Stopped PVE API Proxy Server.
Sep 10 06:27:45 lic2 systemd[1]: Stopping OpenBSD Secure Shell server...
Sep 10 06:27:45 lic2 systemd[1]: Stopped target PVE Storage Target.
Sep 10 06:27:45 lic2 systemd[1]: Stopped OpenBSD Secure Shell server.
Sep 10 06:27:45 lic2 systemd[1]: Stopped PVE Cluster Ressource Manager Daemon.
Sep 10 06:27:45 lic2 systemd[1]: Stopping PVE API Daemon...
Sep 10 07:00:20 lic2 systemd-modules-load[1430]: Inserted module 'iscsi_tcp'
Sep 10 07:00:20 lic2 kernel: [    0.000000] Linux version 4.15.18-20-pve (root@nora) (gcc version 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)) #1 SMP PVE 4.15.18-46 (Thu, 8 Aug 2019 10:42:0
Sep 10 07:00:20 lic2 systemd-modules-load[1430]: Inserted module 'ib_iser'
Sep 10 07:00:20 lic2 kernel: [    0.000000] Command line: BOOT_IMAGE=/ROOT/pve-1@/boot/vmlinuz-4.15.18-20-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
Sep 10 07:00:20 lic2 kernel: [    0.000000] KERNEL supported cpus:
Sep 10 07:00:20 lic2 kernel: [    0.000000]   Intel GenuineIntel
Sep 10 07:00:20 lic2 kernel: [    0.000000]   AMD AuthenticAMD
Sep 10 07:00:20 lic2 systemd-modules-load[1430]: Inserted module 'vhost_net'
 
FYI, I noticed that on all the servers where we hit this 30-minute reboot delay, IPMI also stopped working.
 
FYI, I noticed that on all the servers where we hit this 30-minute reboot delay, IPMI also stopped working.
That might indicate that the problem is not related to the installed system - IPMI should keep working as long as the box has power?
Is the IPMI on a dedicated interface or shared with the host?

Otherwise, the logs might not show the whole picture - if the host crashes on shutdown, the last minutes might not make it to disk.

I hope this helps!
 
I concur, that is why I mentioned it.
The system might have waited for IPMI, which failed to come up, and continued to boot Linux after some timeout.
IPMI is on a dedicated network interface.

Logs show a clean shutdown from the Linux point of view.
 
Sep 10 06:27:45 lic2 systemd[1]: Stopped PVE Cluster Ressource Manager Daemon.
Sep 10 06:27:45 lic2 systemd[1]: Stopping PVE API Daemon...
Sep 10 07:00:20 lic2 systemd-modules-load[1430]: Inserted module 'iscsi_tcp'
hmm - it looks like quite a few log lines are missing (compared to my journal during a clean shutdown). Usually many more services get stopped after pvedaemon (Stopping PVE API Daemon...) - e.g. all mounted filesystems get unmounted, the network gets deactivated, and so on.

last lines for a clean shutdown on my system look like:
Code:
Sep 13 21:10:45 node systemd[1]: Shutting down.
Sep 13 21:10:45 node kernel: printk: systemd-shutdow: 1 output lines suppressed due to ratelimiting
Sep 13 21:10:45 node systemd-shutdown[1]: Syncing filesystems and block devices.
Sep 13 21:10:45 node systemd-shutdown[1]: Sending SIGTERM to remaining processes...
Sep 13 21:10:45 node systemd-journald[6435]: Journal stopped
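
If persistent journalling is enabled on your node, you can pull the tail of the previous boot for comparison, e.g.:
Code:
journalctl --list-boots    # list known boots
journalctl -b -1 -n 100    # last 100 lines of the previous boot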

Does the IPMI log show anything of interest?
 
Oh, are there... OK, I didn't notice. Thanks for pointing it out. Maybe there were some hanging processes, then, and systemd finally decided to reboot the server after 30 minutes.

IPMI has stopped working altogether; I cannot access it.

I cannot reboot these servers just to test this out, so we are basically stuck with the issue here.
All I know is that on the next planned upgrade with a reboot, I might have to wait 30 minutes.
I might go to the data center to do it and observe the screen.

Luckily, there are only two of these servers left in production and one is getting a replacement soon.

Also, thank you for taking the time to help out further, Stoiko. :-) But this issue is not that pressing for me anymore.
 
Some more information: with respect to my original post, the cause is swap not unmounting. Today the server again failed to reboot in a timely manner. I had enabled a debug console and confirmed that the swap partition remained mounted. No clues in the logs. Again, this is just a normal swap partition; ZFS is not used.

Before rebooting I manually shut down (pct shutdown, qm shutdown) all guest systems.

In the debug console I was able to run swapoff -a. It took a while, but once swap was unmounted the system finished the original reboot.
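
Roughly what that looked like in the debug console (reconstructed from memory, not an exact transcript):
Code:
cat /proc/swaps   # swap partition was still listed, i.e. not yet released
swapoff -a        # took a while, then the reboot continued on its own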

Unfortunately I did not dig deeper into possible root causes. :(

Next is to learn why, on this system only, the swap partition sometimes fails to unmount.

Similar swap symptoms were reported a few years ago, and there are other sporadic reports around the web.

I'm guessing the hang is related to the infrequency of reboots: the longer the period between reboots, the greater the chance of hanging. We lack sufficient data to define "long", but after a long uptime I suspect swap is actually in use from paging, and either the reboot is not flushing swap, a process or open file is keeping swap in use, or not all processes are being terminated, which keeps swap in use.
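
A quick way to check that guess before the next reboot would be to look at how much swap is in use and which processes hold it (a sketch using /proc; tools like smem would also work):
Code:
swapon --show && free -h
# per-process swap usage in kB, largest first
for f in /proc/[0-9]*/status; do
    awk '/^Name:/ {n=$2} /^VmSwap:/ {if ($2 > 0) print $2, n}' "$f"
done | sort -rn | head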

The mystery though is why only this system?
 
