lrm node1 (old timestamp - dead?)

MoreDakka

We had an issue where this node was forcibly powered down. Since then HA doesn't like it and it has that error on it.
I wanted to reboot the node but it has a VM on it right now that is stuck in Migrate.
I can power down the VM and reboot the node but I've been trying to find a way to get this fixed (in case this happens when it's a production cluster).
What info do I need to post to help with troubleshooting this or what steps can I take to resolve this?

Thanks!
 
Hi,

can you post the output of the following commands executed on the problematic node here in [code] ... [/code] tags:
systemctl status -n 50 pve-ha-lrm.service pve-ha-crm.service pve-cluster.service
ha-manager status
 
Yeppers!

The output:
Code:
root@pve1-cpu1:~# systemctl status -n 50 pve-ha-lrm.service pve-ha-crm.service pve-cluster.service
● pve-ha-lrm.service - PVE Local HA Resource Manager Daemon
     Loaded: loaded (/lib/systemd/system/pve-ha-lrm.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Wed 2023-02-08 15:38:25 MST; 2 weeks 4 days ago
        CPU: 974ms

Feb 08 15:38:24 pve1-cpu1 systemd[1]: Starting PVE Local HA Resource Manager Daemon...
Feb 08 15:38:25 pve1-cpu1 pve-ha-lrm[2111]: ipcc_send_rec[1] failed: Connection refused
Feb 08 15:38:25 pve1-cpu1 pve-ha-lrm[2111]: ipcc_send_rec[2] failed: Connection refused
Feb 08 15:38:25 pve1-cpu1 pve-ha-lrm[2111]: ipcc_send_rec[1] failed: Connection refused
Feb 08 15:38:25 pve1-cpu1 pve-ha-lrm[2111]: ipcc_send_rec[3] failed: Connection refused
Feb 08 15:38:25 pve1-cpu1 pve-ha-lrm[2111]: ipcc_send_rec[2] failed: Connection refused
Feb 08 15:38:25 pve1-cpu1 pve-ha-lrm[2111]: Unable to load access control list: Connection refused
Feb 08 15:38:25 pve1-cpu1 pve-ha-lrm[2111]: ipcc_send_rec[3] failed: Connection refused
Feb 08 15:38:25 pve1-cpu1 systemd[1]: pve-ha-lrm.service: Control process exited, code=exited, status=111/n/a
Feb 08 15:38:25 pve1-cpu1 systemd[1]: pve-ha-lrm.service: Failed with result 'exit-code'.
Feb 08 15:38:25 pve1-cpu1 systemd[1]: Failed to start PVE Local HA Resource Manager Daemon.

● pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon
     Loaded: loaded (/lib/systemd/system/pve-ha-crm.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2023-02-23 16:09:33 MST; 3 days ago
    Process: 3926452 ExecStart=/usr/sbin/pve-ha-crm start (code=exited, status=0/SUCCESS)
   Main PID: 3926458 (pve-ha-crm)
      Tasks: 1 (limit: 154522)
     Memory: 95.1M
        CPU: 2min 359ms
     CGroup: /system.slice/pve-ha-crm.service
             └─3926458 pve-ha-crm

Feb 23 16:09:32 pve1-cpu1 systemd[1]: Starting PVE Cluster HA Resource Manager Daemon...
Feb 23 16:09:33 pve1-cpu1 pve-ha-crm[3926458]: starting server
Feb 23 16:09:33 pve1-cpu1 pve-ha-crm[3926458]: status change startup => wait_for_quorum
Feb 23 16:09:33 pve1-cpu1 systemd[1]: Started PVE Cluster HA Resource Manager Daemon.
Feb 23 16:09:38 pve1-cpu1 pve-ha-crm[3926458]: status change wait_for_quorum => slave

● pve-cluster.service - The Proxmox VE cluster filesystem
     Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2023-02-08 15:38:41 MST; 2 weeks 4 days ago
   Main PID: 2233 (pmxcfs)
      Tasks: 8 (limit: 154522)
     Memory: 50.0M
        CPU: 54min 48.144s
     CGroup: /system.slice/pve-cluster.service
             └─2233 /usr/bin/pmxcfs

Feb 27 08:45:06 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 08:50:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 08:50:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 08:50:06 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 08:55:03 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 08:55:04 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 08:55:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:00:00 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:00:00 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:00:01 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:00:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:00:06 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:00:07 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:05:03 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:05:04 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:05:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:10:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:10:06 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:10:06 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:15:03 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:15:04 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:15:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:20:04 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:20:04 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:20:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:25:04 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:25:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:25:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:30:00 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:30:00 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:30:01 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:30:04 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:30:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:30:06 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:35:04 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:35:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:35:06 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:38:40 pve1-cpu1 pmxcfs[2233]: [dcdb] notice: data verification successful
Feb 27 09:40:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:40:06 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:40:06 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:45:04 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:45:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:45:06 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:50:04 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:50:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:50:06 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:55:04 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:55:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 27 09:55:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log

Code:
root@pve1-cpu1:~# ha-manager status
quorum OK
master pve1-cpu3 (active, Mon Feb 27 09:56:43 2023)
lrm pve1-cpu1 (old timestamp - dead?, Wed Feb  8 14:13:58 2023)
lrm pve1-cpu2 (active, Mon Feb 27 09:56:39 2023)
lrm pve1-cpu3 (active, Mon Feb 27 09:56:42 2023)
lrm pve1-cpu4 (active, Mon Feb 27 09:56:36 2023)
service ct:102 (pve1-cpu3, started)
service vm:100 (pve1-cpu4, disabled)
service vm:101 (pve1-cpu1, migrate)
service vm:104 (pve1-cpu2, started)
service vm:105 (pve1-cpu2, started)
service vm:107 (pve1-cpu3, started)
service vm:108 (pve1-cpu3, started)
service vm:110 (pve1-cpu2, disabled)
service vm:300 (pve1-cpu2, started)
root@pve1-cpu1:~#

I haven't tried restarting the pve-ha-lrm.service yet; I actually didn't know which service controlled HA until just now ;) Should I try that first?
 
It seems the LRM failed to start multiple times because pmxcfs (pve-cluster.service) was not available; after several retries it went into the failed state.

To better analyze why that happened, one would need to check the logs from around the time of the failure, i.e., Wed 2023-02-08 15:38:25 MST plus or minus an hour or so.
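One way to pull those logs is with journalctl (a sketch; the time window below is taken from the failure timestamp above, adjust it and the timezone to your setup):

```shell
# Logs from the relevant services around the failure window
# (repeat -u to interleave several units chronologically)
journalctl --since "2023-02-08 14:30" --until "2023-02-08 16:30" \
    -u pve-cluster.service -u pve-ha-lrm.service -u pve-ha-crm.service
```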

Anyhow, as the CRM is running and pmxcfs currently seems fine, it's worth trying to simply restart the LRM service:
Bash:
systemctl reset-failed pve-ha-lrm.service
systemctl start pve-ha-lrm.service
 
I restarted the LRM service and the migrations started flowing again, no problem.
Would those logs mainly be under /var/log/syslog?

Thanks,
 
So I restarted the LRM service, HA came online and migrated no problem. Then I did system updates and rebooted the node, and now it's dead again. Based on these logs, any idea?

Code:
Feb 28 11:36:44 pve1-cpu1 systemd[1]: Starting The Proxmox VE cluster filesystem...
Feb 28 11:36:50 pve1-cpu1 pmxcfs[1740]: [main] crit: Unable to get local IP address
Feb 28 11:36:50 pve1-cpu1 pmxcfs[1740]: [main] crit: Unable to get local IP address
Feb 28 11:36:50 pve1-cpu1 systemd[1]: pve-cluster.service: Control process exited, code=exited, status=255/EXCEPTION
Feb 28 11:36:50 pve1-cpu1 systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Feb 28 11:36:50 pve1-cpu1 systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Feb 28 11:36:50 pve1-cpu1 systemd[1]: Starting Corosync Cluster Engine...
Feb 28 11:36:50 pve1-cpu1 corosync[1891]:   [MAIN  ] Corosync Cluster Engine 3.1.7 starting up
Feb 28 11:36:50 pve1-cpu1 corosync[1891]:   [MAIN  ] Corosync built-in features: dbus monitoring watchdog systemd xmlconf vqsim nozzle snmp pie relro bindnow
Feb 28 11:36:50 pve1-cpu1 corosync[1891]:   [TOTEM ] Initializing transport (Kronosnet).
Feb 28 11:36:51 pve1-cpu1 systemd[1]: pve-cluster.service: Scheduled restart job, restart counter is at 1.
Feb 28 11:36:51 pve1-cpu1 systemd[1]: Stopped The Proxmox VE cluster filesystem.
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [TOTEM ] totemknet initialized
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] pmtud: MTU manually set to: 0
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] common: crypto_nss.so has been loaded from /usr/lib/x86_64-linux-gnu/kronosnet/crypto_nss.so
Feb 28 11:36:51 pve1-cpu1 systemd[1]: Starting The Proxmox VE cluster filesystem...
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [SERV  ] Service engine loaded: corosync configuration map access [0]
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [QB    ] server name: cmap
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [SERV  ] Service engine loaded: corosync configuration service [1]
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [QB    ] server name: cfg
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [QB    ] server name: cpg
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [SERV  ] Service engine loaded: corosync profile loading service [4]
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [SERV  ] Service engine loaded: corosync resource monitoring service [6]
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [WD    ] Watchdog not enabled by configuration
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [WD    ] resource load_15min missing a recovery key.
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [WD    ] resource memory_used missing a recovery key.
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [WD    ] no resources configured.
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [SERV  ] Service engine loaded: corosync watchdog service [7]
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [QUORUM] Using quorum provider corosync_votequorum
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [QB    ] server name: votequorum
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [QB    ] server name: quorum
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [TOTEM ] Configuring link 0
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [TOTEM ] Configured link number 0: local addr: 10.10.1.81, port=5405
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [TOTEM ] Configuring link 1
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [TOTEM ] Configured link number 1: local addr: 10.10.2.81, port=5406
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] link: Resetting MTU for link 0 because host 1 joined
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 3 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 3 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 3 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 0)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 4 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 4 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 4 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [QUORUM] Sync members[1]: 1
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [QUORUM] Sync joined[1]: 1
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [TOTEM ] A new membership (1.2bec6) was formed. Members joined: 1
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 3 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 3 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 3 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 4 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 4 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 4 has no active links
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [QUORUM] Members[1]: 1
Feb 28 11:36:51 pve1-cpu1 corosync[1891]:   [MAIN  ] Completed service synchronization, ready to provide service.
Feb 28 11:36:51 pve1-cpu1 systemd[1]: Started Corosync Cluster Engine.
Feb 28 11:36:53 pve1-cpu1 corosync[1891]:   [KNET  ] rx: host: 3 link: 1 is up
Feb 28 11:36:53 pve1-cpu1 corosync[1891]:   [KNET  ] link: Resetting MTU for link 1 because host 3 joined
Feb 28 11:36:53 pve1-cpu1 corosync[1891]:   [KNET  ] rx: host: 4 link: 1 is up
Feb 28 11:36:53 pve1-cpu1 corosync[1891]:   [KNET  ] link: Resetting MTU for link 1 because host 4 joined
Feb 28 11:36:53 pve1-cpu1 corosync[1891]:   [KNET  ] rx: host: 2 link: 1 is up
Feb 28 11:36:53 pve1-cpu1 corosync[1891]:   [KNET  ] link: Resetting MTU for link 1 because host 2 joined
Feb 28 11:36:53 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 3 (passive) best link: 1 (pri: 1)
Feb 28 11:36:53 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 4 (passive) best link: 1 (pri: 1)
Feb 28 11:36:53 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 (passive) best link: 1 (pri: 1)
Feb 28 11:36:53 pve1-cpu1 corosync[1891]:   [KNET  ] pmtud: PMTUD link change for host: 4 link: 1 from 469 to 1397
Feb 28 11:36:53 pve1-cpu1 corosync[1891]:   [KNET  ] pmtud: PMTUD link change for host: 3 link: 1 from 469 to 1397
Feb 28 11:36:53 pve1-cpu1 corosync[1891]:   [KNET  ] pmtud: PMTUD link change for host: 2 link: 1 from 469 to 1397
Feb 28 11:36:53 pve1-cpu1 corosync[1891]:   [KNET  ] pmtud: Global data MTU changed to: 1397
Feb 28 11:36:54 pve1-cpu1 corosync[1891]:   [QUORUM] Sync members[4]: 1 2 3 4
Feb 28 11:36:54 pve1-cpu1 corosync[1891]:   [QUORUM] Sync joined[3]: 2 3 4
Feb 28 11:36:54 pve1-cpu1 corosync[1891]:   [TOTEM ] A new membership (1.2beca) was formed. Members joined: 2 3 4
Feb 28 11:36:54 pve1-cpu1 corosync[1891]:   [QUORUM] This node is within the primary component and will provide service.
Feb 28 11:36:54 pve1-cpu1 corosync[1891]:   [QUORUM] Members[4]: 1 2 3 4
Feb 28 11:36:54 pve1-cpu1 corosync[1891]:   [MAIN  ] Completed service synchronization, ready to provide service.
Feb 28 11:36:56 pve1-cpu1 pmxcfs[1956]: [main] crit: Unable to get local IP address
Feb 28 11:36:56 pve1-cpu1 pmxcfs[1956]: [main] crit: Unable to get local IP address
Feb 28 11:36:56 pve1-cpu1 systemd[1]: pve-cluster.service: Control process exited, code=exited, status=255/EXCEPTION
Feb 28 11:36:56 pve1-cpu1 systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Feb 28 11:36:56 pve1-cpu1 systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Feb 28 11:36:57 pve1-cpu1 systemd[1]: pve-cluster.service: Scheduled restart job, restart counter is at 2.
Feb 28 11:36:57 pve1-cpu1 systemd[1]: Stopped The Proxmox VE cluster filesystem.
Feb 28 11:36:57 pve1-cpu1 systemd[1]: Starting The Proxmox VE cluster filesystem...
Feb 28 11:36:57 pve1-cpu1 corosync[1891]:   [KNET  ] rx: host: 3 link: 0 is up
Feb 28 11:36:57 pve1-cpu1 corosync[1891]:   [KNET  ] link: Resetting MTU for link 0 because host 3 joined
Feb 28 11:36:57 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
CONTINUED IN THE NEXT POST

It mentions IP addresses, but all the interfaces are up with IPs and all nodes are pingable.
When the server was rebooted, I had to restart pveproxy and pvestatd to get it to show up in the interface from the other nodes.

Thanks for your help!
 
Code:
Feb 28 11:36:57 pve1-cpu1 corosync[1891]:   [KNET  ] pmtud: PMTUD link change for host: 3 link: 0 from 469 to 1397
Feb 28 11:36:57 pve1-cpu1 corosync[1891]:   [KNET  ] pmtud: Global data MTU changed to: 1397
Feb 28 11:36:58 pve1-cpu1 corosync[1891]:   [KNET  ] rx: host: 4 link: 0 is up
Feb 28 11:36:58 pve1-cpu1 corosync[1891]:   [KNET  ] link: Resetting MTU for link 0 because host 4 joined
Feb 28 11:36:58 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Feb 28 11:36:58 pve1-cpu1 corosync[1891]:   [KNET  ] pmtud: PMTUD link change for host: 4 link: 0 from 469 to 1397
Feb 28 11:36:58 pve1-cpu1 corosync[1891]:   [KNET  ] pmtud: Global data MTU changed to: 1397
Feb 28 11:37:00 pve1-cpu1 corosync[1891]:   [KNET  ] rx: host: 2 link: 0 is up
Feb 28 11:37:00 pve1-cpu1 corosync[1891]:   [KNET  ] link: Resetting MTU for link 0 because host 2 joined
Feb 28 11:37:00 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Feb 28 11:37:01 pve1-cpu1 corosync[1891]:   [KNET  ] pmtud: PMTUD link change for host: 2 link: 0 from 469 to 1397
Feb 28 11:37:01 pve1-cpu1 corosync[1891]:   [KNET  ] pmtud: Global data MTU changed to: 1397
Feb 28 11:37:03 pve1-cpu1 pmxcfs[2026]: [main] crit: Unable to get local IP address
Feb 28 11:37:03 pve1-cpu1 pmxcfs[2026]: [main] crit: Unable to get local IP address
Feb 28 11:37:03 pve1-cpu1 systemd[1]: pve-cluster.service: Control process exited, code=exited, status=255/EXCEPTION
Feb 28 11:37:03 pve1-cpu1 systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Feb 28 11:37:03 pve1-cpu1 systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Feb 28 11:37:03 pve1-cpu1 systemd[1]: pve-cluster.service: Scheduled restart job, restart counter is at 3.
Feb 28 11:37:03 pve1-cpu1 systemd[1]: Stopped The Proxmox VE cluster filesystem.
Feb 28 11:37:03 pve1-cpu1 systemd[1]: Starting The Proxmox VE cluster filesystem...
Feb 28 11:37:09 pve1-cpu1 pmxcfs[2069]: [main] crit: Unable to get local IP address
Feb 28 11:37:09 pve1-cpu1 pmxcfs[2069]: [main] crit: Unable to get local IP address
Feb 28 11:37:09 pve1-cpu1 systemd[1]: pve-cluster.service: Control process exited, code=exited, status=255/EXCEPTION
Feb 28 11:37:09 pve1-cpu1 systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Feb 28 11:37:09 pve1-cpu1 systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Feb 28 11:37:09 pve1-cpu1 systemd[1]: pve-cluster.service: Scheduled restart job, restart counter is at 4.
Feb 28 11:37:09 pve1-cpu1 systemd[1]: Stopped The Proxmox VE cluster filesystem.
Feb 28 11:37:09 pve1-cpu1 systemd[1]: Starting The Proxmox VE cluster filesystem...
Feb 28 11:37:15 pve1-cpu1 pmxcfs[2112]: [main] crit: Unable to get local IP address
Feb 28 11:37:15 pve1-cpu1 pmxcfs[2112]: [main] crit: Unable to get local IP address
Feb 28 11:37:15 pve1-cpu1 systemd[1]: pve-cluster.service: Control process exited, code=exited, status=255/EXCEPTION
Feb 28 11:37:15 pve1-cpu1 systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Feb 28 11:37:15 pve1-cpu1 systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Feb 28 11:37:15 pve1-cpu1 systemd[1]: pve-cluster.service: Scheduled restart job, restart counter is at 5.
Feb 28 11:37:15 pve1-cpu1 systemd[1]: Stopped The Proxmox VE cluster filesystem.
Feb 28 11:37:15 pve1-cpu1 systemd[1]: Starting The Proxmox VE cluster filesystem...
Feb 28 11:37:21 pve1-cpu1 pmxcfs[2156]: [main] crit: Unable to get local IP address
Feb 28 11:37:21 pve1-cpu1 pmxcfs[2156]: [main] crit: Unable to get local IP address
Feb 28 11:37:21 pve1-cpu1 systemd[1]: pve-cluster.service: Control process exited, code=exited, status=255/EXCEPTION
Feb 28 11:37:21 pve1-cpu1 systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Feb 28 11:37:21 pve1-cpu1 systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Feb 28 11:37:22 pve1-cpu1 systemd[1]: pve-cluster.service: Scheduled restart job, restart counter is at 6.
Feb 28 11:37:22 pve1-cpu1 systemd[1]: Stopped The Proxmox VE cluster filesystem.
Feb 28 11:37:22 pve1-cpu1 systemd[1]: Starting The Proxmox VE cluster filesystem...
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [status] notice: update cluster info (cluster name  4web-pve1, version = 6)
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [status] notice: node has quorum
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [dcdb] notice: members: 1/2233, 2/1671, 3/1666, 4/1756
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [dcdb] notice: starting data syncronisation
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [status] notice: members: 1/2233, 2/1671, 3/1666, 4/1756
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [status] notice: starting data syncronisation
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [dcdb] notice: received sync request (epoch 1/2233/00000001)
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [status] notice: received sync request (epoch 1/2233/00000001)
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [dcdb] notice: received all states
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [dcdb] notice: leader is 2/1671
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [dcdb] notice: synced members: 2/1671, 3/1666, 4/1756
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [dcdb] notice: waiting for updates from leader
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [status] notice: received all states
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [status] notice: all data is up to date
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [status] notice: dfsm_deliver_queue: queue length 2
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [dcdb] notice: update complete - trying to commit (got 10 inode updates)
Feb 28 11:37:26 pve1-cpu1 pmxcfs[2233]: [dcdb] notice: all data is up to date
Feb 28 11:37:27 pve1-cpu1 systemd[1]: Started The Proxmox VE cluster filesystem.
Feb 28 11:40:04 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 28 11:40:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 28 11:40:06 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 28 11:41:38 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 28 11:44:04 pve1-cpu1 corosync[1891]:   [KNET  ] link: host: 2 link: 0 is down
Feb 28 11:44:04 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 (passive) best link: 1 (pri: 1)
Feb 28 11:44:08 pve1-cpu1 corosync[1891]:   [KNET  ] rx: host: 2 link: 0 is up
Feb 28 11:44:08 pve1-cpu1 corosync[1891]:   [KNET  ] link: Resetting MTU for link 0 because host 2 joined
Feb 28 11:44:08 pve1-cpu1 corosync[1891]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Feb 28 11:44:08 pve1-cpu1 corosync[1891]:   [KNET  ] pmtud: Global data MTU changed to: 1397
Feb 28 11:44:50 pve1-cpu1 corosync[1891]:   [TOTEM ] Token has not been received in 3225 ms
Feb 28 11:44:51 pve1-cpu1 corosync[1891]:   [TOTEM ] A processor failed, forming new configuration: token timed out (4300ms), waiting 5160ms for consensus.
Feb 28 11:44:51 pve1-cpu1 corosync[1891]:   [QUORUM] Sync members[4]: 1 2 3 4
Feb 28 11:44:51 pve1-cpu1 corosync[1891]:   [TOTEM ] A new membership (1.2bece) was formed. Members
Feb 28 11:44:51 pve1-cpu1 corosync[1891]:   [QUORUM] Members[4]: 1 2 3 4
Feb 28 11:44:51 pve1-cpu1 corosync[1891]:   [MAIN  ] Completed service synchronization, ready to provide service.
Feb 28 11:45:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 28 11:45:05 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
Feb 28 11:45:06 pve1-cpu1 pmxcfs[2233]: [status] notice: received log
 
I see 'Feb 28 11:37:09 pve1-cpu1 pmxcfs[2069]: [main] crit: Unable to get local IP address'

Is your /etc/hosts in order?
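For context, pmxcfs resolves the node's own hostname via /etc/hosts to find its cluster IP, which is why a mangled hosts file produces exactly this "Unable to get local IP address" error. A minimal sanity-check sketch (the `check_hosts` helper is hypothetical, not a Proxmox tool; the hostname and IP come from the logs in this thread):

```shell
# Hypothetical helper: succeed only if the hosts file maps the given
# hostname to the expected IP on a non-comment line.
check_hosts() {
    # $1 = hosts file, $2 = hostname, $3 = expected IP
    awk -v h="$2" -v ip="$3" '
        $1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == h && $1 == ip) found = 1 }
        END { exit !found }
    ' "$1"
}

# Example hosts file in the expected shape (IP from the corosync logs above)
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost.localdomain localhost
10.10.1.81 pve1-cpu1.example.com pve1-cpu1
EOF

check_hosts /tmp/hosts.sample pve1-cpu1 10.10.1.81 && echo "hosts OK"
```

On a real node you would run the check against /etc/hosts itself with the node's actual hostname and corosync address.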
 
Boooo, I thought I fixed that. We have a WHMCS plugin for Proxmox that, for some reason, puts the hostname the client enters in the panel for their VM into the /etc/hosts file, and that fricks things up on reboot...
Need to talk to their support about that...

Thanks!
 
