Stuck on maintenance mode

Ribcage8189

Hi everyone, I recently added a NIC to one of my nodes and switched the primary NIC for Proxmox.
On reboot the node went into HA maintenance mode, and I can't disable maintenance mode with

ha-manager crm-command node-maintenance disable

I've tried adding/removing HA jobs to no avail.
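
Spelled out in full (pve being the node that's stuck), the command I'm running is:

ha-manager crm-command node-maintenance disable pve
ha-manager status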

Still says:
root@pve:~# ha-manager status
quorum OK
master pve-2 (active, Thu Feb 27 18:25:51 2025)
lrm pve (maintenance mode, Wed Feb 26 20:45:49 2025)
lrm pve-2 (idle, Thu Feb 27 18:25:52 2025)

Is there any way to forcibly disable maintenance mode? It seems to be affecting HA migrations.
If it matters, I am using 2 nodes plus a QDevice.

 
Reboot the node that is in maintenance mode and try to disable it again.
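
Something along these lines, assuming pve is the stuck node (just a sketch):

# on the stuck node
reboot
# once it is back up, from any node in the cluster
ha-manager crm-command node-maintenance disable pve
ha-manager status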
 
Hi,
please share the system journal from the node pve from around the time the issue occurred and from around the time you try to issue the CRM command, as well as the output of pveversion -v
 
Thank you!

This is the journalctl output from when I issued the CRM command just now. I also noticed it says a SMART attribute failed, but the UI says passed; I don't imagine that has anything to do with the maintenance mode?
root@pve:/var/log/pve# ha-manager crm-command node-maintenance disable pve
root@pve:/var/log/pve# journalctl --since "2025-02-28 11:15:00" --until "2025-02-28 12:00:00"
Feb 28 11:17:01 pve CRON[469094]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Feb 28 11:17:01 pve CRON[469095]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Feb 28 11:17:01 pve CRON[469094]: pam_unix(cron:session): session closed for user root
Feb 28 11:17:15 pve pmxcfs[1162]: [status] notice: received log
Feb 28 11:17:15 pve pmxcfs[1162]: [status] notice: received log
Feb 28 11:17:15 pve sshd[469310]: Accepted publickey for root from 192.168.86.125 port 36868 ssh2: RSA SHA256:LD53N/+J6uyd8SWm8nJEHp0xaXbP7RH>
Feb 28 11:17:15 pve sshd[469310]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Feb 28 11:17:15 pve systemd-logind[851]: New session 145 of user root.
Feb 28 11:17:15 pve systemd[1]: Started session-145.scope - Session 145 of User root.
Feb 28 11:17:15 pve sshd[469310]: pam_env(sshd:session): deprecated reading of user environment enabled
Feb 28 11:17:16 pve login[469320]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Feb 28 11:17:16 pve login[469325]: ROOT LOGIN on '/dev/pts/4' from '192.168.86.125'
Feb 28 11:18:42 pve pveproxy[446728]: worker exit
Feb 28 11:18:42 pve pveproxy[1318]: worker 446728 finished
Feb 28 11:18:42 pve pveproxy[1318]: starting 1 worker(s)
Feb 28 11:18:42 pve pveproxy[1318]: worker 470261 started
Feb 28 11:19:57 pve smartd[848]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 52 to 51
Feb 28 11:19:57 pve smartd[848]: Device: /dev/sda [SAT], Failed SMART usage Attribute: 202 Percent_Lifetime_Remain.
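
(To double-check that attribute outside the UI, something like this should show it directly; /dev/sda is just taken from the smartd lines above:)

smartctl -A /dev/sda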




I think this is from when I changed the network port and the node got stuck in maintenance mode:
root@pve:/var/log/pve# journalctl --since "2025-02-26 15:00:00" --until "2025-02-27 00:00:00"
Feb 26 21:26:36 pve systemd-journald[395]: Oldest entry in /var/log/journal/5fd9f3d7083f4875af2a6c9ef3a7a316/system.journal is older than the>
Feb 26 21:26:36 pve systemd-journald[395]: /var/log/journal/5fd9f3d7083f4875af2a6c9ef3a7a316/system.journal: Journal header limits reached or>
Feb 26 21:26:36 pve systemd[1]: Starting dpkg-db-backup.service - Daily dpkg database backup service...
Dec 02 02:30:07 pve chronyd[1222]: System clock wrong by 7498589.132781 seconds
Feb 26 21:26:36 pve systemd[1]: Starting e2scrub_all.service - Online ext4 Metadata Check for All Filesystems...
Feb 26 21:26:36 pve chronyd[1222]: System clock was stepped by 7498589.132781 seconds
Feb 26 21:26:36 pve systemd[1]: Starting logrotate.service - Rotate log files...
Feb 26 21:26:36 pve chronyd[1222]: System clock TAI offset set to 37 seconds
Feb 26 21:26:36 pve systemd[1]: e2scrub_all.service: Deactivated successfully.
Feb 26 21:26:36 pve spiceproxy[1464]: starting server
Feb 26 21:26:36 pve systemd[1]: Finished e2scrub_all.service - Online ext4 Metadata Check for All Filesystems.
Feb 26 21:26:36 pve spiceproxy[1464]: starting 1 worker(s)
Feb 26 21:26:36 pve systemd[1]: dpkg-db-backup.service: Deactivated successfully.
Feb 26 21:26:36 pve spiceproxy[1464]: worker 1465 started
Feb 26 21:26:36 pve systemd[1]: Finished dpkg-db-backup.service - Daily dpkg database backup service.
Feb 26 21:26:36 pve systemd[1]: Started spiceproxy.service - PVE SPICE Proxy Server.
Feb 26 21:26:36 pve systemd[1]: Starting pve-guests.service - PVE guests...
Feb 26 21:26:36 pve systemd[1]: logrotate.service: Deactivated successfully.
Feb 26 21:26:36 pve systemd[1]: Finished logrotate.service - Rotate log files.
Feb 26 21:26:37 pve pmxcfs[1289]: [dcdb] notice: data verification successful
Feb 26 21:26:38 pve pve-guests[1490]: <root@pam> starting task UPID:pve:000005E8:00002921:67BED03E:startall::root@pam:
Feb 26 21:26:38 pve pvesh[1490]: Starting VM 1011
Feb 26 21:26:38 pve pve-guests[1513]: start VM 1011: UPID:pve:000005E9:00002924:67BED03E:qmstart:1011:root@pam:
Feb 26 21:26:38 pve pve-guests[1512]: <root@pam> starting task UPID:pve:000005E9:00002924:67BED03E:qmstart:1011:root@pam:
Feb 26 21:26:38 pve systemd[1]: Created slice qemu.slice - Slice /qemu.
Feb 26 21:26:38 pve systemd[1]: Started 1011.scope.
Feb 26 21:26:40 pve kernel: tap1011i0: entered promiscuous mode
Feb 26 21:26:40 pve kernel: vmbr0v10: port 3(fwpr1011p0) entered blocking state
Feb 26 21:26:40 pve kernel: vmbr0v10: port 3(fwpr1011p0) entered disabled state
Feb 26 21:26:40 pve kernel: fwpr1011p0: entered allmulticast mode
Feb 26 21:26:40 pve kernel: fwpr1011p0: entered promiscuous mode
Feb 26 21:26:40 pve kernel: vmbr0v10: port 3(fwpr1011p0) entered blocking state
Feb 26 21:26:40 pve kernel: vmbr0v10: port 3(fwpr1011p0) entered forwarding state
Feb 26 21:26:40 pve kernel: fwbr1011i0: port 1(fwln1011i0) entered blocking state
Feb 26 21:26:40 pve kernel: fwbr1011i0: port 1(fwln1011i0) entered disabled state
Feb 26 21:26:40 pve kernel: fwln1011i0: entered allmulticast mode
Feb 26 21:26:40 pve kernel: fwln1011i0: entered promiscuous mode
Feb 26 21:26:40 pve kernel: fwbr1011i0: port 1(fwln1011i0) entered blocking state
Feb 26 21:26:40 pve kernel: fwbr1011i0: port 1(fwln1011i0) entered forwarding state
Output of pveversion -v
root@pve:/var/log/pve# pveversion -v
proxmox-ve: 8.3.0 (running kernel: 6.8.12-8-pve)
pve-manager: 8.3.4 (running version: 8.3.4/65224a0f9cd294a3)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.8: 6.8.12-8
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-7-pve-signed: 6.8.12-7
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.2.0
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.3-1
proxmox-backup-file-restore: 3.3.3-1
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.3.4
pve-cluster: 8.0.10
pve-container: 5.2.4
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-3
pve-ha-manager: 4.0.6
pve-i18n: 3.3.3
pve-qemu-kvm: 9.0.2-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.8
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve1
 
Hi,
please share the system journal from the node pve from around the time the issue occurred and from around the time you try to issue the CRM command, as well as the output of pveversion -v
Sorry, forgot to reply, please see above, thank you!
 
Oh sorry. I guess you need to check the journal from the other node, because that is the current HA master.

Do you have any HA services configured?
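
For example, something like ha-manager config should list the configured HA resources (empty output would mean none are configured):

ha-manager config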
 
Oh sorry. I guess you need to check the journal from the other node, because that is the current HA master.

I've removed previous HA services, so nothing configured at the moment.

Journal output after CRM command:
root@pve-2:~# ha-manager crm-command node-maintenance disable pve
root@pve-2:~# journalctl --since "2025-03-01 01:15:00" --until "2025-03-01 2:00:00"
Mar 01 01:17:01 pve-2 CRON[715706]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Mar 01 01:17:01 pve-2 CRON[715707]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Mar 01 01:17:01 pve-2 CRON[715706]: pam_unix(cron:session): session closed for user root
Mar 01 01:17:55 pve-2 systemd[1]: Starting man-db.service - Daily man-db regeneration...
Mar 01 01:17:55 pve-2 systemd[1]: man-db.service: Deactivated successfully.
Mar 01 01:17:55 pve-2 systemd[1]: Finished man-db.service - Daily man-db regeneration.
Mar 01 01:19:55 pve-2 pveproxy[705840]: Clearing outdated entries from certificate cache
Mar 01 01:20:07 pve-2 sshd[716844]: Accepted publickey for root from 192.168.86.126 port 40052 ssh2: RSA SHA256:N/U9IbFKEJMPgvAmykAEr0AEk/zaZ9WvNnCXTS55SPE
Mar 01 01:20:07 pve-2 sshd[716844]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Mar 01 01:20:07 pve-2 systemd[1]: Created slice user-0.slice - User Slice of UID 0.
Mar 01 01:20:07 pve-2 systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0...
Mar 01 01:20:07 pve-2 systemd-logind[778]: New session 237 of user root.
Mar 01 01:20:07 pve-2 systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0.
Mar 01 01:20:07 pve-2 systemd[1]: Starting user@0.service - User Manager for UID 0...
Mar 01 01:20:07 pve-2 (systemd)[716847]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Mar 01 01:20:07 pve-2 systemd[716847]: Queued start job for default target default.target.
Mar 01 01:20:07 pve-2 systemd[716847]: Created slice app.slice - User Application Slice.
Mar 01 01:20:07 pve-2 systemd[716847]: Reached target paths.target - Paths.
Mar 01 01:20:07 pve-2 systemd[716847]: Reached target timers.target - Timers.
Mar 01 01:20:07 pve-2 systemd[716847]: Listening on dirmngr.socket - GnuPG network certificate management daemon.
Mar 01 01:20:07 pve-2 systemd[716847]: Listening on gpg-agent-browser.socket - GnuPG cryptographic agent and passphrase cache (access for web browsers).
Mar 01 01:20:07 pve-2 systemd[716847]: Listening on gpg-agent-extra.socket - GnuPG cryptographic agent and passphrase cache (restricted).
Mar 01 01:20:07 pve-2 systemd[716847]: Listening on gpg-agent-ssh.socket - GnuPG cryptographic agent (ssh-agent emulation).
Mar 01 01:20:07 pve-2 systemd[716847]: Listening on gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.
Mar 01 01:20:07 pve-2 systemd[716847]: Reached target sockets.target - Sockets.
Mar 01 01:20:07 pve-2 systemd[716847]: Reached target basic.target - Basic System.
Mar 01 01:20:07 pve-2 systemd[716847]: Reached target default.target - Main User Target.
Mar 01 01:20:07 pve-2 systemd[716847]: Startup finished in 118ms.
Mar 01 01:20:07 pve-2 systemd[1]: Started user@0.service - User Manager for UID 0.
Mar 01 01:20:07 pve-2 systemd[1]: Started session-237.scope - Session 237 of User root.
Mar 01 01:20:07 pve-2 sshd[716844]: pam_env(sshd:session): deprecated reading of user environment enabled
Mar 01 01:20:08 pve-2 sshd[716844]: Received disconnect from 192.168.86.126 port 40052:11: disconnected by user
Mar 01 01:20:08 pve-2 sshd[716844]: Disconnected from user root 192.168.86.126 port 40052
Mar 01 01:20:08 pve-2 sshd[716844]: pam_unix(sshd:session): session closed for user root
Mar 01 01:20:08 pve-2 systemd[1]: session-237.scope: Deactivated successfully.
Mar 01 01:20:08 pve-2 systemd-logind[778]: Session 237 logged out. Waiting for processes to exit.
Mar 01 01:20:08 pve-2 systemd-logind[778]: Removed session 237.
Mar 01 01:20:08 pve-2 sshd[716871]: Accepted publickey for root from 192.168.86.126 port 41878 ssh2: RSA SHA256:N/U9IbFKEJMPgvAmykAEr0AEk/zaZ9WvNnCXTS55SPE
Mar 01 01:20:08 pve-2 sshd[716871]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Mar 01 01:20:08 pve-2 systemd-logind[778]: New session 239 of user root.
Mar 01 01:20:08 pve-2 systemd[1]: Started session-239.scope - Session 239 of User root.
Mar 01 01:20:08 pve-2 sshd[716871]: pam_env(sshd:session): deprecated reading of user environment enabled
Mar 01 01:20:09 pve-2 sshd[716871]: Received disconnect from 192.168.86.126 port 41878:11: disconnected by user
Mar 01 01:20:09 pve-2 sshd[716871]: Disconnected from user root 192.168.86.126 port 41878
Mar 01 01:20:09 pve-2 sshd[716871]: pam_unix(sshd:session): session closed for user root
Mar 01 01:20:09 pve-2 systemd[1]: session-239.scope: Deactivated successfully.
Mar 01 01:20:09 pve-2 systemd-logind[778]: Session 239 logged out. Waiting for processes to exit.
Mar 01 01:20:09 pve-2 systemd-logind[778]: Removed session 239.
Mar 01 01:20:10 pve-2 sshd[716897]: Accepted publickey for root from 192.168.86.126 port 41886 ssh2: RSA SHA256:N/U9IbFKEJMPgvAmykAEr0AEk/zaZ9WvNnCXTS55SPE
Mar 01 01:20:10 pve-2 sshd[716897]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Mar 01 01:20:10 pve-2 systemd-logind[778]: New session 240 of user root.
Mar 01 01:20:10 pve-2 systemd[1]: Started session-240.scope - Session 240 of User root.
Mar 01 01:20:10 pve-2 sshd[716897]: pam_env(sshd:session): deprecated reading of user environment enabled
Mar 01 01:20:10 pve-2 sshd[716897]: Received disconnect from 192.168.86.126 port 41886:11: disconnected by user
Mar 01 01:20:10 pve-2 sshd[716897]: Disconnected from user root 192.168.86.126 port 41886
Mar 01 01:20:10 pve-2 sshd[716897]: pam_unix(sshd:session): session closed for user root
Mar 01 01:20:10 pve-2 systemd[1]: session-240.scope: Deactivated successfully.
Mar 01 01:20:10 pve-2 systemd-logind[778]: Session 240 logged out. Waiting for processes to exit.
Mar 01 01:20:10 pve-2 systemd-logind[778]: Removed session 240.
Mar 01 01:20:11 pve-2 pveproxy[705965]: Clearing outdated entries from certificate cache
Mar 01 01:20:21 pve-2 systemd[1]: Stopping user@0.service - User Manager for UID 0...
Mar 01 01:20:21 pve-2 systemd[716847]: Activating special unit exit.target...
Mar 01 01:20:21 pve-2 systemd[716847]: Stopped target default.target - Main User Target.
Mar 01 01:20:21 pve-2 systemd[716847]: Stopped target basic.target - Basic System.
Mar 01 01:20:21 pve-2 systemd[716847]: Stopped target paths.target - Paths.
Mar 01 01:20:21 pve-2 systemd[716847]: Stopped target sockets.target - Sockets.
Mar 01 01:20:21 pve-2 systemd[716847]: Stopped target timers.target - Timers.
Mar 01 01:20:21 pve-2 systemd[716847]: Closed dirmngr.socket - GnuPG network certificate management daemon.
Mar 01 01:20:21 pve-2 systemd[716847]: Closed gpg-agent-browser.socket - GnuPG cryptographic agent and passphrase cache (access for web browsers).
Mar 01 01:20:21 pve-2 systemd[716847]: Closed gpg-agent-extra.socket - GnuPG cryptographic agent and passphrase cache (restricted).
Mar 01 01:20:21 pve-2 systemd[716847]: Closed gpg-agent-ssh.socket - GnuPG cryptographic agent (ssh-agent emulation).
Mar 01 01:20:21 pve-2 systemd[716847]: Closed gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.
Mar 01 01:20:21 pve-2 systemd[716847]: Removed slice app.slice - User Application Slice.
Mar 01 01:20:21 pve-2 systemd[716847]: Reached target shutdown.target - Shutdown.
Mar 01 01:20:21 pve-2 systemd[716847]: Finished systemd-exit.service - Exit the Session.
Mar 01 01:20:21 pve-2 systemd[716847]: Reached target exit.target - Exit the Session.
Mar 01 01:20:21 pve-2 systemd[1]: user@0.service: Deactivated successfully.
 
Journal output from around the time of the NIC change:
root@pve-2:~# journalctl --since "2025-02-26 15:00:00" --until "2025-02-27 00:00:00"
Feb 26 15:01:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 15:10:41 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 15:14:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 15:17:01 pve-2 CRON[1868519]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Feb 26 15:17:01 pve-2 CRON[1868520]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Feb 26 15:17:01 pve-2 CRON[1868519]: pam_unix(cron:session): session closed for user root
Feb 26 15:17:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 15:21:07 pve-2 pmxcfs[1006]: [dcdb] notice: data verification successful
Feb 26 15:25:41 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 15:30:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 15:33:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 15:41:41 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 15:45:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 15:47:14 pve-2 postfix/qmgr[1220113]: 97723180A6: from=<root@pve-2.home.arpa>, size=622, nrcpt=1 (queue active)
Feb 26 15:47:14 pve-2 postfix/qmgr[1220113]: EF55F2F9B7: from=<>, size=3193, nrcpt=1 (queue active)
Feb 26 15:47:14 pve-2 postfix/qmgr[1220113]: D982821012: from=<>, size=3193, nrcpt=1 (queue active)
Feb 26 15:47:14 pve-2 postfix/qmgr[1220113]: 6F52C2889B: from=<>, size=2520, nrcpt=1 (queue active)
Feb 26 15:47:14 pve-2 postfix/qmgr[1220113]: 6B3052C53D: from=<>, size=22174, nrcpt=1 (queue active)
Feb 26 15:47:14 pve-2 postfix/local[1886419]: error: open database /etc/aliases.db: No such file or directory
Feb 26 15:47:14 pve-2 postfix/local[1886419]: warning: hash:/etc/aliases is unavailable. open database /etc/aliases.db: No such file or directory
Feb 26 15:47:14 pve-2 postfix/local[1886419]: warning: hash:/etc/aliases: lookup of 'root' failed
Feb 26 15:47:14 pve-2 postfix/local[1886420]: error: open database /etc/aliases.db: No such file or directory
Feb 26 15:47:14 pve-2 postfix/local[1886420]: warning: hash:/etc/aliases is unavailable. open database /etc/aliases.db: No such file or directory
Feb 26 15:47:14 pve-2 postfix/local[1886420]: warning: hash:/etc/aliases: lookup of 'root' failed
Feb 26 15:47:14 pve-2 postfix/local[1886419]: 97723180A6: to=<root@pve-2.home.arpa>, orig_to=<root>, relay=local, delay=184657, delays=184657/0.01/0/0.01, dsn=4.3.0, status=deferred (alias database unavailable)
Feb 26 15:47:14 pve-2 postfix/local[1886419]: warning: hash:/etc/aliases is unavailable. open database /etc/aliases.db: No such file or directory
Feb 26 15:47:14 pve-2 postfix/local[1886419]: warning: hash:/etc/aliases: lookup of 'root' failed
Feb 26 15:47:14 pve-2 postfix/local[1886420]: EF55F2F9B7: to=<root@pve-2.home.arpa>, relay=local, delay=391013, delays=391013/0.01/0/0, dsn=4.3.0, status=deferred (alias database unavailable)
Feb 26 15:47:14 pve-2 postfix/local[1886420]: warning: hash:/etc/aliases is unavailable. open database /etc/aliases.db: No such file or directory
Feb 26 15:47:14 pve-2 postfix/local[1886420]: warning: hash:/etc/aliases: lookup of 'root' failed
Feb 26 15:47:14 pve-2 postfix/local[1886419]: D982821012: to=<root@pve-2.home.arpa>, relay=local, delay=131974, delays=131974/0.01/0/0, dsn=4.3.0, status=deferred (alias database unavailable)
Feb 26 15:47:14 pve-2 postfix/local[1886419]: warning: hash:/etc/aliases is unavailable. open database /etc/aliases.db: No such file or directory
Feb 26 15:47:14 pve-2 postfix/local[1886419]: warning: hash:/etc/aliases: lookup of 'root' failed
Feb 26 15:47:14 pve-2 postfix/local[1886420]: 6F52C2889B: to=<root@pve-2.home.arpa>, relay=local, delay=90211, delays=90211/0.01/0/0, dsn=4.3.0, status=deferred (alias database unavailable)
Feb 26 15:47:14 pve-2 postfix/local[1886419]: 6B3052C53D: to=<root@pve-2.home.arpa>, relay=local, delay=186041, delays=186041/0.01/0/0, dsn=4.3.0, status=deferred (alias database unavailable)
Feb 26 15:47:14 pve-2 postfix/local[1886423]: error: open database /etc/aliases.db: No such file or directory
Feb 26 15:49:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 15:57:41 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 16:01:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 16:05:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 16:07:14 pve-2 postfix/qmgr[1220113]: B6884308C3: from=<>, size=3193, nrcpt=1 (queue active)
Feb 26 16:07:14 pve-2 postfix/local[1898241]: error: open database /etc/aliases.db: No such file or directory
Feb 26 16:07:14 pve-2 postfix/local[1898241]: warning: hash:/etc/aliases is unavailable. open database /etc/aliases.db: No such file or directory
Feb 26 16:07:14 pve-2 postfix/local[1898241]: warning: hash:/etc/aliases: lookup of 'root' failed
Feb 26 16:07:14 pve-2 postfix/local[1898241]: B6884308C3: to=<root@pve-2.home.arpa>, relay=local, delay=46610, delays=46610/0.01/0/0.01, dsn=4.3.0, status=deferred (alias database unavailable)
Feb 26 16:13:40 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 16:17:01 pve-2 CRON[1904101]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Feb 26 16:17:01 pve-2 CRON[1904102]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Feb 26 16:17:01 pve-2 CRON[1904101]: pam_unix(cron:session): session closed for user root
Feb 26 16:17:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 16:21:07 pve-2 pmxcfs[1006]: [dcdb] notice: data verification successful
Feb 26 16:21:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 16:29:41 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 16:33:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 16:37:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 16:45:40 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 16:49:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 16:53:26 pve-2 pmxcfs[1006]: [status] notice: received log
Feb 26 16:57:14 pve-2 postfix/qmgr[1220113]: 97723180A6: from=<root@pve-2.home.arpa>, size=622, nrcpt=1 (queue active)
Feb 26 16:57:14 pve-2 postfix/qmgr[1220113]: EF55F2F9B7: from=<>, size=3193, nrcpt=1 (queue active)
Feb 26 16:57:14 pve-2 postfix/qmgr[1220113]: D982821012: from=<>, size=3193, nrcpt=1 (queue active)
Feb 26 16:57:14 pve-2 postfix/qmgr[1220113]: 6F52C2889B: from=<>, size=2520, nrcpt=1 (queue active)
Feb 26 16:57:14 pve-2 postfix/qmgr[1220113]: 6B3052C53D: from=<>, size=22174, nrcpt=1 (queue active)
Feb 26 16:57:14 pve-2 postfix/local[1927950]: error: open database /etc/aliases.db: No such file or directory
Feb 26 16:57:14 pve-2 postfix/local[1927950]: warning: hash:/etc/aliases is unavailable. open database /etc/aliases.db: No such file or directory
Feb 26 16:57:14 pve-2 postfix/local[1927950]: warning: hash:/etc/aliases: lookup of 'root' failed
Feb 26 16:57:14 pve-2 postfix/local[1927951]: error: open database /etc/aliases.db: No such file or directory
Feb 26 16:57:14 pve-2 postfix/local[1927951]: warning: hash:/etc/aliases is unavailable. open database /etc/aliases.db: No such file or directory
Feb 26 16:57:14 pve-2 postfix/local[1927951]: warning: hash:/etc/aliases: lookup of 'root' failed
Feb 26 16:57:14 pve-2 postfix/local[1927950]: 97723180A6: to=<root@pve-2.home.arpa>, orig_to=<root>, relay=local, delay=188857, delays=188857/0.01/0/0.01, dsn=4.3.0, status=deferred (alias database unavailable)
Feb 26 16:57:14 pve-2 postfix/local[1927950]: warning: hash:/etc/aliases is unavailable. open database /etc/aliases.db: No such file or directory
Feb 26 16:57:14 pve-2 postfix/local[1927950]: warning: hash:/etc/aliases: lookup of 'root' failed
Feb 26 16:57:14 pve-2 postfix/local[1927951]: EF55F2F9B7: to=<root@pve-2.home.arpa>, relay=local, delay=395212, delays=395212/0.01/0/0, dsn=4.3.0, status=deferred (alias database unavailable)
Feb 26 16:57:14 pve-2 postfix/local[1927951]: warning: hash:/etc/aliases is unavailable. open database /etc/aliases.db: No such file or directory
Feb 26 16:57:14 pve-2 postfix/local[1927951]: warning: hash:/etc/aliases: lookup of 'root' failed
Feb 26 16:57:14 pve-2 postfix/local[1927950]: D982821012: to=<root@pve-2.home.arpa>, relay=local, delay=136173, delays=136173/0.01/0/0, dsn=4.3.0, status=deferred (alias database unavailable)
Feb 26 16:57:14 pve-2 postfix/local[1927950]: warning: hash:/etc/aliases is unavailable. open database /etc/aliases.db: No such file or directory

pveversion output on the master node:
root@pve-2:~# pveversion -v
proxmox-ve: 8.3.0 (running kernel: 6.8.12-8-pve)
pve-manager: 8.3.4 (running version: 8.3.4/65224a0f9cd294a3)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-8
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.2.0
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.3-1
proxmox-backup-file-restore: 3.3.3-1
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.4
pve-cluster: 8.0.10
pve-container: 5.2.4
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-3
pve-ha-manager: 4.0.6
pve-i18n: 3.3.3
pve-qemu-kvm: 9.0.2-5
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.8
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve1
 
I've removed previous HA services, so nothing configured at the moment.

Journal output after CRM command:
The log doesn't show the CRM command being received. Did you issue the journal command immediately afterwards? The HA CRM service only checks for commands every 10 seconds.

What does ha-manager status --verbose show? What about systemctl status pve-ha-crm.service pve-ha-lrm.service on both nodes?
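
To catch it in the journal, something along these lines should work on the master node (a sketch; the sleep just gives the CRM time to pick the command up):

ha-manager crm-command node-maintenance disable pve
sleep 15
journalctl -u pve-ha-crm.service --since "5 minutes ago"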
 
The log doesn't show the CRM command being received. Did you issue the journal command immediately afterwards? The HA CRM service only checks for commands every 10 seconds.

What does ha-manager status --verbose show? What about systemctl status pve-ha-crm.service pve-ha-lrm.service on both nodes?
I've just run the CRM command again, and this is the log:
root@pve-2:~# ha-manager crm-command node-maintenance disable pve
root@pve-2:~# journalctl --since "2025-03-01 10:15:00" --until "2025-03-01 10:30:00"
Mar 01 10:15:36 pve-2 pmxcfs[1039]: [status] notice: received log
Mar 01 10:15:37 pve-2 pmxcfs[1039]: [status] notice: received log
Mar 01 10:16:52 pve-2 pvedaemon[37199]: worker exit
Mar 01 10:16:52 pve-2 pvedaemon[1179]: worker 37199 finished
Mar 01 10:16:52 pve-2 pvedaemon[1179]: starting 1 worker(s)
Mar 01 10:16:52 pve-2 pvedaemon[1179]: worker 924576 started
Mar 01 10:17:01 pve-2 CRON[924620]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Mar 01 10:17:01 pve-2 CRON[924621]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Mar 01 10:17:01 pve-2 CRON[924620]: pam_unix(cron:session): session closed for user root

root@pve-2:~# ha-manager status --verbose
quorum OK
master pve-2 (idle, Sat Mar 1 10:18:26 2025)
lrm pve (maintenance mode, Wed Feb 26 20:45:49 2025)
lrm pve-2 (idle, Sat Mar 1 10:19:04 2025)
full cluster state:
{
   "lrm_status" : {
      "pve" : {
         "mode" : "maintenance",
         "results" : {
            "Zcz4mjWQEatiPoYB09FB9Q" : {
               "exit_code" : 0,
               "sid" : "vm:1010",
               "state" : "migrate"
            }
         },
         "state" : "wait_for_agent_lock",
         "timestamp" : 1740555949
      },
      "pve-2" : {
         "mode" : "active",
         "results" : {
            "eBZ//lmDjx4lv4v3GU4APQ" : {
               "exit_code" : 7,
               "sid" : "ct:102",
               "state" : "started"
            }
         },
         "state" : "wait_for_agent_lock",
         "timestamp" : 1740777544
      }
   },
   "manager_status" : {
      "master_node" : "pve-2",
      "node_request" : {
         "pve" : {},
         "pve-2" : {}
      },
      "node_status" : {
         "pve" : "maintenance",
         "pve-2" : "online"
      },
      "service_status" : {},
      "timestamp" : 1740777506
   },
   "quorum" : {
      "node" : "pve-2",
      "quorate" : "1"
   }
}
 
The log doesn't show the CRM command being received. Did you issue the journal command immediately afterwards? The HA CRM service only checks for commands every 10 seconds.

What does ha-manager status --verbose show? What about systemctl status pve-ha-crm.service pve-ha-lrm.service on both nodes?

systemctl output from the master node (pve-2):
root@pve-2:~# systemctl status pve-ha-crm.service pve-ha-lrm.service
● pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon
     Loaded: loaded (/lib/systemd/system/pve-ha-crm.service; enabled; preset: enabled)
     Active: active (running) since Thu 2025-02-27 17:01:21 NZDT; 1 day 17h ago
    Process: 1183 ExecStart=/usr/sbin/pve-ha-crm start (code=exited, status=0/SUCCESS)
   Main PID: 1186 (pve-ha-crm)
      Tasks: 1 (limit: 19003)
     Memory: 111.6M
        CPU: 36.559s
     CGroup: /system.slice/pve-ha-crm.service
             └─1186 pve-ha-crm

Mar 01 10:01:30 pve-2 pve-ha-crm[1186]: lost lock 'ha_manager_lock - cfs lock update failed - No such file or directory
Mar 01 10:01:35 pve-2 pve-ha-crm[1186]: status change wait_for_quorum => slave
Mar 01 10:03:25 pve-2 pve-ha-crm[1186]: successfully acquired lock 'ha_manager_lock'
Mar 01 10:03:25 pve-2 pve-ha-crm[1186]: watchdog active
Mar 01 10:03:25 pve-2 pve-ha-crm[1186]: status change slave => master
Mar 01 10:18:36 pve-2 pve-ha-crm[1186]: cluster had no service configured for 90 rounds, going idle.
Mar 01 10:18:36 pve-2 pve-ha-crm[1186]: watchdog closed (disabled)
Mar 01 10:18:36 pve-2 pve-ha-crm[1186]: status change master => wait_for_quorum
Mar 01 10:18:41 pve-2 pve-ha-crm[1186]: lost lock 'ha_manager_lock - cfs lock update failed - No such file or directory
Mar 01 10:18:46 pve-2 pve-ha-crm[1186]: status change wait_for_quorum => slave

● pve-ha-lrm.service - PVE Local HA Resource Manager Daemon
     Loaded: loaded (/lib/systemd/system/pve-ha-lrm.service; enabled; preset: enabled)
     Active: active (running) since Thu 2025-02-27 17:01:26 NZDT; 1 day 17h ago
    Process: 1194 ExecStart=/usr/sbin/pve-ha-lrm start (code=exited, status=0/SUCCESS)
   Main PID: 1200 (pve-ha-lrm)
      Tasks: 1 (limit: 19003)
     Memory: 111.0M
        CPU: 44.103s
     CGroup: /system.slice/pve-ha-lrm.service
             └─1200 pve-ha-lrm

Feb 27 17:04:30 pve-2 pve-ha-lrm[5903]: Task 'UPID:pve-2:00001710:00003A7C:67BFE41C:qmshutdown:1010:root@pam:' still active, waiting
Feb 27 17:04:35 pve-2 pve-ha-lrm[5903]: Task 'UPID:pve-2:00001710:00003A7C:67BFE41C:qmshutdown:1010:root@pam:' still active, waiting
Feb 27 17:04:40 pve-2 pve-ha-lrm[5903]: Task 'UPID:pve-2:00001710:00003A7C:67BFE41C:qmshutdown:1010:root@pam:' still active, waiting
Feb 27 17:04:40 pve-2 pve-ha-lrm[5904]: VM still running - terminating now with SIGTERM
Feb 27 17:04:43 pve-2 pve-ha-lrm[5903]: <root@pam> end task UPID:pve-2:00001710:00003A7C:67BFE41C:qmshutdown:1010:root@pam: OK
Feb 27 17:04:43 pve-2 pve-ha-lrm[5903]: service status vm:1010 stopped
Feb 27 17:45:00 pve-2 pve-ha-lrm[32886]: missing resource configuration for 'ct:102'
Feb 27 17:55:10 pve-2 pve-ha-lrm[1200]: node had no service configured for 60 rounds, going idle.
Feb 27 17:55:10 pve-2 pve-ha-lrm[1200]: watchdog closed (disabled)
Feb 27 17:55:10 pve-2 pve-ha-lrm[1200]: status change active => wait_for_agent_lock

systemctl output from the node stuck in maintenance (pve):
root@pve:~# systemctl status pve-ha-crm.service pve-ha-lrm.service
○ pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon
     Loaded: loaded (/lib/systemd/system/pve-ha-crm.service; disabled; preset: enabled)
     Active: inactive (dead)

○ pve-ha-lrm.service - PVE Local HA Resource Manager Daemon
     Loaded: loaded (/lib/systemd/system/pve-ha-lrm.service; disabled; preset: enabled)
     Active: inactive (dead)

Weird? Do I just need to enable it? I've never disabled it before, and it had been running fine for at least a month.

EDIT: Tried just enabling it and ran the CRM command again, but no luck. This is the output:
root@pve:~# systemctl enable pve-ha-crm.service pve-ha-lrm.service
Created symlink /etc/systemd/system/multi-user.target.wants/pve-ha-crm.service → /lib/systemd/system/pve-ha-crm.service.
Created symlink /etc/systemd/system/multi-user.target.wants/pve-ha-lrm.service → /lib/systemd/system/pve-ha-lrm.service.
root@pve:~# systemctl status pve-ha-crm.service pve-ha-lrm.service
○ pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon
     Loaded: loaded (/lib/systemd/system/pve-ha-crm.service; enabled; preset: enabled)
     Active: inactive (dead)

○ pve-ha-lrm.service - PVE Local HA Resource Manager Daemon
     Loaded: loaded (/lib/systemd/system/pve-ha-lrm.service; enabled; preset: enabled)
     Active: inactive (dead)
 
You need to start the systemd services too. Enabling a service doesn't also start it right away (it would only be started on the next boot). But the No such file or directory error is strange. What do pvecm status and pvecm nodes show?
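
For example (on the node pve; systemctl enable --now would also enable and start in one step):

systemctl start pve-ha-crm.service pve-ha-lrm.service
systemctl status pve-ha-crm.service pve-ha-lrm.service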
 
You need to start the systemd services too. Enabling a service doesn't also start it right away (it would only be started on the next boot). But the No such file or directory error is strange. What do pvecm status and pvecm nodes show?
Starting the services fixed everything; it's no longer in maintenance mode!! Thank you so much! Still no idea why that happened from changing NICs, but for now that doesn't bother me. I'll test the HA function at a better time.

If it's of interest:
root@pve:~# pvecm status
Cluster information
-------------------
Name: pve-1-cluster
Config Version: 9
Transport: knet
Secure auth: on

Quorum information
------------------
Date: Tue Mar 4 13:04:13 2025
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1.9c
Quorate: Yes

Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate Qdevice

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
0x00000001          1    A,V,NMW 192.168.86.126 (local)
0x00000002          1    A,V,NMW 192.168.86.125
0x00000000          1            Qdevice

root@pve:~# pvecm nodes

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
         1          1    A,V,NMW pve (local)
         2          1    A,V,NMW pve-2
         0          1            Qdevice