Proxmox node restarting automatically every day, logs provided here

manues50

Code:
Oct 27 00:00:14 node011 systemd[1]: man-db.service: Succeeded.
Oct 27 00:00:14 node011 systemd[1]: Finished Daily man-db regeneration.
Oct 27 00:00:15 node011 systemd[1]: Reloading PVE API Proxy Server.
Oct 27 00:00:16 node011 pveproxy[1270503]: send HUP to 3700
Oct 27 00:00:16 node011 pveproxy[3700]: received signal HUP
Oct 27 00:00:16 node011 pveproxy[3700]: server closing
Oct 27 00:00:16 node011 pveproxy[3700]: server shutdown (restart)
Oct 27 00:00:16 node011 systemd[1]: Reloaded PVE API Proxy Server.
Oct 27 00:00:16 node011 systemd[1]: Reloading PVE SPICE Proxy Server.
Oct 27 00:00:16 node011 spiceproxy[1270510]: send HUP to 3724
Oct 27 00:00:16 node011 spiceproxy[3724]: received signal HUP
Oct 27 00:00:16 node011 spiceproxy[3724]: server closing
Oct 27 00:00:16 node011 spiceproxy[3724]: server shutdown (restart)
Oct 27 00:00:16 node011 systemd[1]: Reloaded PVE SPICE Proxy Server.
Oct 27 00:00:16 node011 systemd[1]: Stopping Proxmox VE firewall logger...
Oct 27 00:00:16 node011 pvefw-logger[709745]: received terminate request (signal)
Oct 27 00:00:16 node011 pvefw-logger[709745]: stopping pvefw logger
Oct 27 00:00:16 node011 spiceproxy[3724]: restarting server
Oct 27 00:00:16 node011 spiceproxy[3724]: starting 1 worker(s)
Oct 27 00:00:16 node011 spiceproxy[3724]: worker 1270527 started
Oct 27 00:00:16 node011 systemd[1]: pvefw-logger.service: Succeeded.
Oct 27 00:00:16 node011 systemd[1]: Stopped Proxmox VE firewall logger.
Oct 27 00:00:16 node011 systemd[1]: pvefw-logger.service: Consumed 7.537s CPU time.
Oct 27 00:00:16 node011 systemd[1]: Starting Proxmox VE firewall logger...
Oct 27 00:00:16 node011 systemd[1]: Started Proxmox VE firewall logger.
Oct 27 00:00:16 node011 pvefw-logger[1270529]: starting pvefw logger
Oct 27 00:00:16 node011 systemd[1]: logrotate.service: Succeeded.
Oct 27 00:00:16 node011 systemd[1]: Finished Rotate log files.
Oct 27 00:00:16 node011 systemd[1]: logrotate.service: Consumed 1.145s CPU time.
Oct 27 00:00:16 node011 pveproxy[3700]: restarting server
Oct 27 00:00:16 node011 pveproxy[3700]: starting 3 worker(s)
Oct 27 00:00:16 node011 pveproxy[3700]: worker 1270534 started
Oct 27 00:00:16 node011 pveproxy[3700]: worker 1270535 started
Oct 27 00:00:16 node011 pveproxy[3700]: worker 1270536 started
Oct 27 00:00:21 node011 spiceproxy[709743]: worker exit
 
I'm getting the same issue. It started two days ago.

My logs show, like @manues50's, that pveproxy restarts at 00:00:02:
Code:
Oct 27 00:00:00 cerberus2 systemd[1]: Starting dpkg-db-backup.service - Daily dpkg database backup service...
Oct 27 00:00:00 cerberus2 systemd[1]: Starting sanoid.service - Snapshot ZFS filesystems...
Oct 27 00:00:00 cerberus2 systemd[1]: Starting logrotate.service - Rotate log files...
Oct 27 00:00:00 cerberus2 systemd[1]: dpkg-db-backup.service: Deactivated successfully.
Oct 27 00:00:00 cerberus2 systemd[1]: Finished dpkg-db-backup.service - Daily dpkg database backup service.
Oct 27 00:00:00 cerberus2 systemd[1]: Reloading pveproxy.service - PVE API Proxy Server...
Oct 27 00:00:00 cerberus2 sanoid[979314]: INFO: taking snapshots...
Oct 27 00:00:00 cerberus2 sanoid[979314]: taking snapshot rpool/enc-data/subvol-102-disk-0@autosnap_2023-10-27_05:00:00_hourly
Oct 27 00:00:00 cerberus2 sanoid[979314]: taking snapshot rpool/ROOT/pve-1@autosnap_2023-10-27_05:00:00_hourly
Oct 27 00:00:01 cerberus2 sanoid[979314]: taking snapshot rpool/enc-data/subvol-106-disk-0@autosnap_2023-10-27_05:00:01_hourly
Oct 27 00:00:01 cerberus2 sanoid[979314]: taking snapshot rpool/enc-data/subvol-105-disk-0@autosnap_2023-10-27_05:00:01_hourly
Oct 27 00:00:01 cerberus2 sanoid[979314]: taking snapshot rpool/enc-data/vm-108-disk-0@autosnap_2023-10-27_05:00:01_hourly
Oct 27 00:00:01 cerberus2 sanoid[979314]: taking snapshot rpool/enc-data/vm-109-disk-0@autosnap_2023-10-27_05:00:01_hourly
Oct 27 00:00:01 cerberus2 sanoid[979314]: taking snapshot rpool/enc-data/vm-112-disk-0@autosnap_2023-10-27_05:00:01_hourly
Oct 27 00:00:01 cerberus2 sanoid[979314]: taking snapshot rpool/enc-data/subvol-111-disk-0@autosnap_2023-10-27_05:00:01_hourly
Oct 27 00:00:01 cerberus2 sanoid[979314]: taking snapshot rpool/ROOT@autosnap_2023-10-27_05:00:01_hourly
Oct 27 00:00:01 cerberus2 sanoid[979314]: taking snapshot rpool/enc-data/pbs-server-backup@autosnap_2023-10-27_05:00:01_hourly
Oct 27 00:00:01 cerberus2 sanoid[979314]: INFO: cache expired - updating from zfs list.
Oct 27 00:00:01 cerberus2 systemd[1]: sanoid.service: Deactivated successfully.
Oct 27 00:00:01 cerberus2 systemd[1]: Finished sanoid.service - Snapshot ZFS filesystems.
Oct 27 00:00:01 cerberus2 CRON[979714]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Oct 27 00:00:01 cerberus2 CRON[979715]: (root) CMD ([ -f /etc/sanoid/sanoid.conf ] && if [ ! -d /run/systemd/system ]; then TZ=UTC /usr/sbin/sanoid --cron --quiet; fi)
Oct 27 00:00:01 cerberus2 CRON[979714]: pam_unix(cron:session): session closed for user root
Oct 27 00:00:01 cerberus2 systemd[1]: Starting sanoid-prune.service - Prune ZFS snapshots...
Oct 27 00:00:01 cerberus2 sanoid[979716]: INFO: pruning snapshots...
Oct 27 00:00:01 cerberus2 systemd[1]: sanoid-prune.service: Deactivated successfully.
Oct 27 00:00:01 cerberus2 systemd[1]: Finished sanoid-prune.service - Prune ZFS snapshots.
Oct 27 00:00:01 cerberus2 pveproxy[979329]: send HUP to 2971
Oct 27 00:00:01 cerberus2 pveproxy[2971]: received signal HUP
Oct 27 00:00:01 cerberus2 pveproxy[2971]: server closing
Oct 27 00:00:01 cerberus2 pveproxy[2971]: server shutdown (restart)
Oct 27 00:00:01 cerberus2 systemd[1]: Reloaded pveproxy.service - PVE API Proxy Server.
Oct 27 00:00:01 cerberus2 systemd[1]: Reloading spiceproxy.service - PVE SPICE Proxy Server...
Oct 27 00:00:02 cerberus2 spiceproxy[979721]: send HUP to 2977
Oct 27 00:00:02 cerberus2 spiceproxy[2977]: received signal HUP
Oct 27 00:00:02 cerberus2 spiceproxy[2977]: server closing
Oct 27 00:00:02 cerberus2 spiceproxy[2977]: server shutdown (restart)
Oct 27 00:00:02 cerberus2 systemd[1]: Reloaded spiceproxy.service - PVE SPICE Proxy Server.
Oct 27 00:00:02 cerberus2 pvefw-logger[1002]: received terminate request (signal)
Oct 27 00:00:02 cerberus2 pvefw-logger[1002]: stopping pvefw logger
Oct 27 00:00:02 cerberus2 systemd[1]: Stopping pvefw-logger.service - Proxmox VE firewall logger...
Oct 27 00:00:02 cerberus2 systemd[1]: pvefw-logger.service: Deactivated successfully.
Oct 27 00:00:02 cerberus2 systemd[1]: Stopped pvefw-logger.service - Proxmox VE firewall logger.
Oct 27 00:00:02 cerberus2 systemd[1]: pvefw-logger.service: Consumed 4.161s CPU time.
Oct 27 00:00:02 cerberus2 spiceproxy[2977]: restarting server
Oct 27 00:00:02 cerberus2 spiceproxy[2977]: starting 1 worker(s)
Oct 27 00:00:02 cerberus2 systemd[1]: Starting pvefw-logger.service - Proxmox VE firewall logger...
Oct 27 00:00:02 cerberus2 spiceproxy[2977]: worker 979739 started
Oct 27 00:00:02 cerberus2 systemd[1]: Started pvefw-logger.service - Proxmox VE firewall logger.
Oct 27 00:00:02 cerberus2 pvefw-logger[979740]: starting pvefw logger
Oct 27 00:00:02 cerberus2 systemd[1]: logrotate.service: Deactivated successfully.
Oct 27 00:00:02 cerberus2 systemd[1]: Finished logrotate.service - Rotate log files.
Oct 27 00:00:02 cerberus2 tailscaled[1315]: magicsock: closing connection to derp-16 (idle), age 1m0s
Oct 27 00:00:02 cerberus2 tailscaled[1315]: magicsock: 1 active derp conns: derp-9=cr11h6m0s,wr4m0s
Oct 27 00:00:02 cerberus2 pveproxy[2971]: restarting server
Oct 27 00:00:02 cerberus2 pveproxy[2971]: starting 3 worker(s)
Oct 27 00:00:02 cerberus2 pveproxy[2971]: worker 979754 started
Oct 27 00:00:02 cerberus2 pveproxy[2971]: worker 979755 started
Oct 27 00:00:02 cerberus2 pveproxy[2971]: worker 979756 started
Oct 27 00:00:04 cerberus2 tailscaled[1315]: Accept: TCP{100.73.233.151:60662 > 100.109.147.75:445} 124 ok out
Oct 27 00:00:04 cerberus2 tailscaled[1315]: magicsock: adding connection to derp-16 for [MeeRF]
Oct 27 00:00:04 cerberus2 tailscaled[1315]: magicsock: 2 active derp conns: derp-9=cr11h6m0s,wr4m0s derp-16=cr0s,wr0s
Oct 27 00:00:04 cerberus2 tailscaled[1315]: derphttp.Client.Recv: connecting to derp-16 (mia)
Oct 27 00:00:04 cerberus2 tailscaled[1315]: Accept: TCP{100.109.147.75:445 > 100.73.233.151:60662} 124 tcp non-syn
Oct 27 00:00:04 cerberus2 tailscaled[1315]: Accept: TCP{100.73.233.151:60662 > 100.109.147.75:445} 52 ok out
Oct 27 00:00:04 cerberus2 tailscaled[1315]: magicsock: derp-16 connected; connGen=1
Oct 27 00:00:07 cerberus2 spiceproxy[2978]: worker exit
Oct 27 00:00:07 cerberus2 spiceproxy[2977]: worker 2978 finished
Oct 27 00:00:07 cerberus2 pveproxy[923686]: worker exit
Oct 27 00:00:07 cerberus2 pveproxy[911930]: worker exit
Oct 27 00:00:07 cerberus2 pveproxy[914689]: worker exit
Oct 27 00:00:07 cerberus2 pveproxy[2971]: worker 911930 finished
Oct 27 00:00:07 cerberus2 pveproxy[2971]: worker 923686 finished
Oct 27 00:00:07 cerberus2 pveproxy[2971]: worker 914689 finished

What can cause pveproxy to restart automatically at midnight?
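Not an authoritative answer, but note that both journals show a graceful reload (SIGHUP, "server shutdown (restart)", new workers) right as logrotate runs, not a node reboot. A likely trigger is a postrotate hook in the logrotate config that reloads the proxy services. A sketch of how one might check (the inline stanza below is illustrative only; the real config on a PVE node lives under `/etc/logrotate.d/` and may differ):

```shell
# On the node itself, these two commands would show whether logrotate
# reloads pveproxy and whether its timer fires at midnight:
#   grep -rn -A3 postrotate /etc/logrotate.d/
#   systemctl list-timers logrotate.timer
#
# Illustrative check against a sample stanza (contents are an assumption,
# not copied from a real PVE install):
sample='/var/log/pve/tasks/index {
    daily
    postrotate
        systemctl try-reload-or-restart pveproxy.service spiceproxy.service
    endscript
}'
# Print the postrotate hook and the two lines after it:
printf '%s\n' "$sample" | grep -A2 'postrotate'
```

If the real config contains a hook like this, the midnight "restart" is expected behavior rather than a fault.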
 
I was able to trace my issue back to a newly created LXC container; it was the only new addition to the node before the restarts began. I don't yet understand why the container is the cause.

But for now, disabling the new container is a good enough workaround for me, since it's not running anything essential.

Have you created any new LXC containers lately? I wonder if your restarts have the same source as mine.
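In case it helps, the workaround sketched below is roughly what I did. CTID 106 is a placeholder (substitute your container's ID), and the `pct` tool only exists on the Proxmox node itself, so the sketch guards for that:

```shell
# Hypothetical workaround sketch: stop the suspect container and keep it
# from starting again at boot. 106 is a placeholder CTID; run this on the
# PVE node as root.
if command -v pct >/dev/null 2>&1; then
    pct stop 106            # stop the container now
    pct set 106 --onboot 0  # do not autostart it at boot
else
    echo "pct not available: run this on the Proxmox node"
fi
```

After that, watching the journal across the next midnight window should show whether the restarts stop.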