Single node reboot/sync issue causes the entire datacenter to malfunction

Superfish1000

Member
Oct 28, 2019
I have been having a consistent and incredibly frustrating issue: rebooting a single node in my datacenter causes the entire datacenter to fail and the web GUI to become unresponsive and effectively useless.

I then need to log into the node(s) over SSH and either restart the cluster-related services or disconnect the offending node. Barring this, there is no way to correct the issue, as the web GUI is completely unusable. In the screenshot below you can still see the nodes, but they take over a minute to even appear when the web GUI loads.
[Screenshot: 1673663677858.png]

Once this happens, the only ways I can resolve it are to disconnect the node (Node-0 in this case), or to SSH into that node and restart the corosync, pve-cluster, and pve-storage services.
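For reference, my manual recovery boils down to something like this (a rough sketch of what I type, not a polished script; "node-0" is a placeholder for whichever node is stuck):

```shell
# Rough sketch of the manual recovery routine described above.
# "node-0" is a placeholder; run from any machine with root SSH access.
recover_node() {
    node="$1"
    # Restart the cluster stack on the stuck node:
    ssh "root@$node" 'systemctl restart corosync pve-cluster'
    # Then check whether it rejoined quorum:
    ssh "root@$node" 'pvecm status'
}

# Example invocation:
#   recover_node node-0
```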

The same behavior also happens occasionally when there is a communication disruption between the nodes, and it requires the same treatment.

Is there any way to resolve this without always having to connect to nodes manually and restart services? It's incredibly irritating that they refuse to ever re-sync on their own.
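In the meantime I've been toying with automating that restart on each node. This is an untested sketch, not something I'm running in production; it assumes `pvecm status` prints a "Quorate: Yes/No" line, as it does on my PVE 7.3 nodes:

```shell
#!/bin/sh
# Untested sketch: restart the cluster stack automatically when this
# node drops out of quorum, instead of my manual SSH routine.

quorate() {
    # Succeeds when pvecm-style status output on stdin reports quorum.
    grep -q 'Quorate:[[:space:]]*Yes'
}

recover_if_needed() {
    if ! pvecm status 2>/dev/null | quorate; then
        systemctl restart corosync pve-cluster
    fi
}

# Only act when invoked with --apply, e.g. from a cron entry or
# systemd timer on every node:
#   */5 * * * * /usr/local/sbin/quorum-watchdog.sh --apply
case "${1-}" in
    --apply) recover_if_needed ;;
esac
```

I'd still rather the cluster recovered on its own, but this would at least spare me the SSH sessions.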

proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-5.15: 7.2-14
pve-kernel-helper: 7.2-14
pve-kernel-5.11: 7.0-10
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph: 16.2.9-pve1
ceph-fuse: 16.2.9-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-8
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.2-12
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.7-1
proxmox-backup-file-restore: 2.2.7-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.5-6
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-1
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1
 
Node-0 corosync[2459]: [MAIN ] Corosync Cluster Engine 3.1.7 starting up
Node-0 corosync[2459]: [MAIN ] Corosync built-in features: dbus monitoring watchdog systemd xmlconf vqsim nozzle snmp pie relro bindnow
Node-0 CRON[2466]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Node-0 corosync[2459]: [TOTEM ] Initializing transport (Kronosnet).
Node-0 kernel: sctp: Hash tables configured (bind 1024/1024)
Node-0 ceph-mds[2456]: starting mds.Node-0 at
Node-0 corosync[2459]: [TOTEM ] totemknet initialized
Node-0 corosync[2459]: [KNET ] pmtud: MTU manually set to: 0
Node-0 corosync[2459]: [KNET ] common: crypto_nss.so has been loaded from /usr/lib/x86_64-linux-gnu/kronosnet/crypto_nss.so
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:07.176-0500 7f910502c500 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Node-0 corosync[2459]: [SERV ] Service engine loaded: corosync configuration map access [0]
Node-0 corosync[2459]: [QB ] server name: cmap
Node-0 corosync[2459]: [SERV ] Service engine loaded: corosync configuration service [1]
Node-0 corosync[2459]: [QB ] server name: cfg
Node-0 corosync[2459]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Node-0 corosync[2459]: [QB ] server name: cpg
Node-0 corosync[2459]: [SERV ] Service engine loaded: corosync profile loading service [4]
Node-0 corosync[2459]: [SERV ] Service engine loaded: corosync resource monitoring service [6]
Node-0 corosync[2459]: [WD ] Watchdog not enabled by configuration
Node-0 corosync[2459]: [WD ] resource load_15min missing a recovery key.
Node-0 corosync[2459]: [WD ] resource memory_used missing a recovery key.
Node-0 corosync[2459]: [WD ] no resources configured.
Node-0 corosync[2459]: [SERV ] Service engine loaded: corosync watchdog service [7]
Node-0 corosync[2459]: [QUORUM] Using quorum provider corosync_votequorum
Node-0 corosync[2459]: [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
Node-0 corosync[2459]: [QB ] server name: votequorum
Node-0 corosync[2459]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Node-0 corosync[2459]: [QB ] server name: quorum
Node-0 corosync[2459]: [TOTEM ] Configuring link 0
Node-0 corosync[2459]: [TOTEM ] Configured link number 0: local addr: 192.168.190.6, port=5405
Node-0 corosync[2459]: [TOTEM ] Configuring link 1
Node-0 corosync[2459]: [TOTEM ] Configured link number 1: local addr: 192.168.192.6, port=5406
Node-0 corosync[2459]: [TOTEM ] Configuring link 2
Node-0 corosync[2459]: [TOTEM ] Configured link number 2: local addr: XXX.XXX.XXX.XXX, port=5407
Node-0 corosync[2459]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 0)
Node-0 corosync[2459]: [KNET ] host: host: 5 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 5 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 5 has no active links
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 0 because host 6 joined
Node-0 corosync[2459]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 1 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 1 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 1 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 0)
Node-0 systemd[1]: Started Corosync Cluster Engine.
Node-0 corosync[2459]: [KNET ] host: host: 2 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 2 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 2 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 3 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 3 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 3 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 4 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 4 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 4 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [QUORUM] Sync members[1]: 6
Node-0 corosync[2459]: [QUORUM] Sync joined[1]: 6
Node-0 corosync[2459]: [TOTEM ] A new membership (6.1484ca) was formed. Members joined: 6
Node-0 corosync[2459]: [KNET ] host: host: 5 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 5 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 5 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 1 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 1 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 1 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 2 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 2 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 2 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 3 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 3 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 3 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 4 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 4 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [QUORUM] Members[1]: 6
Node-0 corosync[2459]: [MAIN ] Completed service synchronization, ready to provide service.
Node-0 corosync[2459]: [KNET ] host: host: 4 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 5 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 5 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 5 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 1 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 1 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 1 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 2 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 2 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 2 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 3 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 3 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 3 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 4 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 4 has no active links
Node-0 corosync[2459]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 4 has no active links
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:07.264-0500 7f910502c500 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Node-0 systemd[1]: Starting PVE API Daemon...
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:07.352-0500 7f910502c500 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Node-0 pve-firewall[2591]: starting server
Node-0 pvestatd[2592]: starting server
Node-0 systemd[1]: Started Proxmox VE firewall.
Node-0 systemd[1]: Started PVE Status Daemon.
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:07.512-0500 7f910502c500 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Node-0 pvefw-logger[1850]: received terminate request (signal)
Node-0 pvefw-logger[1850]: stopping pvefw logger
Node-0 systemd[1]: Stopping Proxmox VE firewall logger...
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:07.668-0500 7f910502c500 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Node-0 systemd[1]: pvefw-logger.service: Succeeded.
Node-0 systemd[1]: Stopped Proxmox VE firewall logger.
Node-0 systemd[1]: Starting Proxmox VE firewall logger...
Node-0 pvefw-logger[2645]: starting pvefw logger
Node-0 systemd[1]: Started Proxmox VE firewall logger.
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:08.008-0500 7f910502c500 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:08.144-0500 7f910502c500 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Node-0 pvedaemon[2648]: starting server
Node-0 pvedaemon[2648]: starting 3 worker(s)
Node-0 pvedaemon[2648]: worker 2649 started
Node-0 pvedaemon[2648]: worker 2650 started
Node-0 pvedaemon[2648]: worker 2651 started
Node-0 systemd[1]: Started PVE API Daemon.
Node-0 systemd[1]: Starting PVE Cluster HA Resource Manager Daemon...
Node-0 systemd[1]: Starting PVE API Proxy Server...
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:08.220-0500 7f910502c500 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Node-0 kernel: igb 0000:06:00.0 eno1: igb: eno1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Node-0 kernel: vmbr1: port 1(eno1) entered blocking state
Node-0 kernel: vmbr1: port 1(eno1) entered listening state
Node-0 kernel: vmbr0: port 1(eno1.2) entered blocking state
Node-0 kernel: vmbr0: port 1(eno1.2) entered listening state
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:08.304-0500 7f910502c500 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:08.380-0500 7f910502c500 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:08.824-0500 7f910502c500 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Node-0 pve-ha-crm[3163]: starting server
Node-0 pve-ha-crm[3163]: status change startup => wait_for_quorum
Node-0 systemd[1]: Started PVE Cluster HA Resource Manager Daemon.
Node-0 ceph-mgr[2457]: context.c:56: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:08.948-0500 7f910502c500 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Node-0 ceph-mgr[2457]: context.c:56: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:09.420-0500 7f910502c500 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Node-0 pveproxy[3489]: starting server
Node-0 pveproxy[3489]: starting 3 worker(s)
Node-0 pveproxy[3489]: worker 3490 started
Node-0 pveproxy[3489]: worker 3491 started
Node-0 pveproxy[3489]: worker 3492 started
Node-0 systemd[1]: Started PVE API Proxy Server.
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:09.516-0500 7f910502c500 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Node-0 systemd[1]: Starting PVE Local HA Resource Manager Daemon...
Node-0 systemd[1]: Starting PVE SPICE Proxy Server...
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:09.680-0500 7f910502c500 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:09.760-0500 7f910502c500 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
 
Node-0 spiceproxy[3499]: starting server
Node-0 spiceproxy[3499]: starting 1 worker(s)
Node-0 spiceproxy[3499]: worker 3500 started
Node-0 systemd[1]: Started PVE SPICE Proxy Server.
Node-0 ceph-mgr[2457]: context.c:56: warning: mpd_setminalloc: ignoring request to set MPD_MINALLOC a second time
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:09.960-0500 7f910502c500 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:10.104-0500 7f910502c500 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Node-0 corosync[2459]: [KNET ] rx: host: 4 link: 0 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 0 because host 4 joined
Node-0 corosync[2459]: [KNET ] rx: host: 3 link: 0 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 0 because host 3 joined
Node-0 corosync[2459]: [KNET ] rx: host: 2 link: 0 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 0 because host 2 joined
Node-0 corosync[2459]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] rx: host: 1 link: 0 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 0 because host 1 joined
Node-0 corosync[2459]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] rx: host: 4 link: 1 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 1 because host 4 joined
Node-0 corosync[2459]: [KNET ] rx: host: 3 link: 1 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 1 because host 3 joined
Node-0 corosync[2459]: [KNET ] rx: host: 2 link: 1 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 1 because host 2 joined
Node-0 corosync[2459]: [KNET ] rx: host: 1 link: 1 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 1 because host 1 joined
Node-0 corosync[2459]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Node-0 pve-ha-lrm[3501]: starting server
Node-0 pve-ha-lrm[3501]: status change startup => wait_for_agent_lock
Node-0 systemd[1]: Started PVE Local HA Resource Manager Daemon.
Node-0 systemd[1]: Starting PVE guests...
Node-0 kernel: vmbr0: port 1(eno1.2) entered learning state
Node-0 kernel: vmbr1: port 1(eno1) entered learning state
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:10.308-0500 7f910502c500 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Node-0 corosync[2459]: [KNET ] pmtud: PMTUD link change for host: 4 link: 0 from 469 to 65413
Node-0 corosync[2459]: [KNET ] pmtud: PMTUD link change for host: 4 link: 1 from 469 to 2693
Node-0 corosync[2459]: [KNET ] pmtud: PMTUD link change for host: 3 link: 0 from 469 to 65413
Node-0 corosync[2459]: [KNET ] pmtud: PMTUD link change for host: 3 link: 1 from 469 to 2693
Node-0 corosync[2459]: [KNET ] pmtud: PMTUD link change for host: 2 link: 0 from 469 to 65413
Node-0 corosync[2459]: [KNET ] pmtud: PMTUD link change for host: 2 link: 1 from 469 to 2693
Node-0 corosync[2459]: [KNET ] pmtud: PMTUD link change for host: 1 link: 0 from 469 to 65413
Node-0 corosync[2459]: [KNET ] pmtud: PMTUD link change for host: 1 link: 1 from 469 to 2693
Node-0 corosync[2459]: [KNET ] pmtud: Global data MTU changed to: 2693
Node-0 ceph-mgr[2457]: 2023-01-13T21:18:10.380-0500 7f910502c500 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Node-0 pve-guests[3503]: <root@pam> starting task UPID:Node-0:00000DB7:00001613:63C210E3:startall::root@pam:
Node-0 pvesh[3503]: waiting for quorum ...
Node-0 pmxcfs[2363]: [status] notice: update cluster info (cluster name Arke-Net, version = 13)
Node-0 kernel: vmbr1: port 1(eno1) entered forwarding state
Node-0 kernel: vmbr1: topology change detected, propagating
Node-0 kernel: vmbr0: port 1(eno1.2) entered forwarding state
Node-0 kernel: vmbr0: topology change detected, propagating
Node-0 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vmbr1: link becomes ready
Node-0 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vmbr0: link becomes ready
Node-0 corosync[2459]: [KNET ] rx: host: 4 link: 2 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 2 because host 4 joined
Node-0 corosync[2459]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] rx: host: 3 link: 2 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 2 because host 3 joined
Node-0 corosync[2459]: [KNET ] rx: host: 2 link: 2 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 2 because host 2 joined
Node-0 corosync[2459]: [KNET ] rx: host: 1 link: 2 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 2 because host 1 joined
Node-0 corosync[2459]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] pmtud: PMTUD link change for host: 4 link: 2 from 469 to 1397
Node-0 corosync[2459]: [KNET ] pmtud: PMTUD link change for host: 3 link: 2 from 469 to 1397
Node-0 corosync[2459]: [KNET ] pmtud: PMTUD link change for host: 2 link: 2 from 469 to 1397
Node-0 corosync[2459]: [KNET ] pmtud: PMTUD link change for host: 1 link: 2 from 469 to 1397
Node-0 corosync[2459]: [KNET ] pmtud: Global data MTU changed to: 1397
Node-0 kernel: L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.
Node-0 systemd[1]: Mounting /mnt/pve/UranusCEPHFS...
Node-0 kernel: FS-Cache: Loaded
Node-0 kernel: Key type ceph registered
Node-0 kernel: libceph: loaded (mon/osd proto 15/24)
Node-0 kernel: FS-Cache: Netfs 'ceph' registered for caching
Node-0 kernel: ceph: loaded (mds proto 32)
Node-0 kernel: libceph: mon3 (1)192.168.190.4:6789 session established
Node-0 kernel: libceph: client130607963 fsid 9XXXXXXXXXXXXXXXXXXXXXXXXXX3
Node-0 systemd[1]: Mounted /mnt/pve/UranusCEPHFS.
Node-0 corosync[2459]: [QUORUM] Sync members[1]: 6
Node-0 corosync[2459]: [TOTEM ] A new membership (6.14850a) was formed. Members
Node-0 corosync[2459]: [QUORUM] Members[1]: 6
Node-0 corosync[2459]: [MAIN ] Completed service synchronization, ready to provide service.
Node-0 pmxcfs[2363]: [dcdb] notice: members: 6/2363
Node-0 pmxcfs[2363]: [dcdb] notice: all data is up to date
Node-0 pmxcfs[2363]: [status] notice: members: 6/2363
Node-0 pmxcfs[2363]: [status] notice: all data is up to date
Node-0 chronyd[2260]: Selected source 69.89.207.199 (2.debian.pool.ntp.org)
Node-0 chronyd[2260]: System clock TAI offset set to 37 seconds
Node-0 corosync[2459]: [QUORUM] Sync members[1]: 6
Node-0 corosync[2459]: [TOTEM ] A new membership (6.14850e) was formed. Members
Node-0 corosync[2459]: [QUORUM] Members[1]: 6
Node-0 corosync[2459]: [MAIN ] Completed service synchronization, ready to provide service.
Node-0 corosync[2459]: [QUORUM] Sync members[1]: 6
Node-0 corosync[2459]: [TOTEM ] A new membership (6.14851a) was formed. Members
Node-0 corosync[2459]: [QUORUM] Members[1]: 6
Node-0 corosync[2459]: [MAIN ] Completed service synchronization, ready to provide service.
Node-0 corosync[2459]: [QUORUM] Sync members[1]: 6
Node-0 corosync[2459]: [TOTEM ] A new membership (6.148526) was formed. Members
Node-0 corosync[2459]: [QUORUM] Members[1]: 6
Node-0 corosync[2459]: [MAIN ] Completed service synchronization, ready to provide service.
Node-0 systemd[1]: Stopping ZeroTier One...
Node-0 systemd[1]: zerotier-one.service: Succeeded.
Node-0 systemd[1]: Stopped ZeroTier One.
Node-0 systemd[1]: Started ZeroTier One.
Node-0 systemd-udevd[5122]: Using default interface naming scheme 'v247'.
Node-0 systemd-udevd[5122]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Node-0 systemd-udevd[5121]: Using default interface naming scheme 'v247'.
Node-0 systemd-udevd[5121]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Node-0 corosync[2459]: [KNET ] link: host: 2 link: 1 is down
Node-0 corosync[2459]: [KNET ] link: host: 1 link: 1 is down
Node-0 corosync[2459]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] rx: host: 2 link: 1 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 1 because host 2 joined
Node-0 corosync[2459]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] rx: host: 1 link: 1 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 1 because host 1 joined
Node-0 corosync[2459]: [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Node-0 corosync[2459]: [KNET ] pmtud: Global data MTU changed to: 1397
Node-0 corosync[2459]: [KNET ] rx: host: 5 link: 1 is up
Node-0 corosync[2459]: [KNET ] link: Resetting MTU for link 1 because host 5 joined
Node-0 corosync[2459]: [KNET ] host: host: 5 (passive) best link: 1 (pri: 1)
Node-0 corosync[2459]: [KNET ] pmtud: PMTUD link change for host: 5 link: 1 from 469 to 2693
Node-0 corosync[2459]: [KNET ] pmtud: Global data MTU changed to: 1397

######################################## Repeated for 281 lines ########################################

Node-0 corosync[2459]: [QUORUM] Sync members[1]: 6
Node-0 corosync[2459]: [TOTEM ] A new membership (6.148532) was formed. Members
Node-0 corosync[2459]: [QUORUM] Members[1]: 6
Node-0 corosync[2459]: [MAIN ] Completed service synchronization, ready to provide service.

######################################## /Repeated for 281 lines ########################################

Node-0 CRON[2466]: pam_unix(cron:session): session closed for user root
Node-0 postfix/pickup[2449]: DECB728CEE: uid=0 from=<root>
Node-0 postfix/cleanup[6330]: DECB728CEE: message-id=<20230114021955.DECB728CEE@Node-0.fakeDomain.TLD>
Node-0 postfix/qmgr[2450]: DECB728CEE: from=<root@Node-0.fakeDomain.TLD>, size=937, nrcpt=1 (queue active)
Node-0 proxmox-mail-forward[6333]: forward mail to <user@fakeEmail.TLD>
Node-0 postfix/pickup[2449]: E7B8228CEF: uid=65534 from=<root>
Node-0 postfix/cleanup[6330]: E7B8228CEF: message-id=<20230114021955.DECB728CEE@Node-0.fakeDomain.TLDt>
Node-0 postfix/local[6332]: DECB728CEE: to=<root@Node-0.fakeDomain.TLD>, orig_to=<root>, relay=local, delay=0.05, delays=0.02/0.01/0/0.02, dsn=2.0.0, status=sent (delivered to command: /usr/bin/proxmox-mail-forward)
Node-0 postfix/qmgr[2450]: DECB728CEE: removed
Node-0 postfix/qmgr[2450]: E7B8228CEF: from=<root@Node-0.fakeDomain.TLD>, size=1122, nrcpt=1 (queue active)
Node-0 corosync[2459]: [QUORUM] Sync members[1]: 6
Node-0 corosync[2459]: [TOTEM ] A new membership (6.1485c2) was formed. Members
Node-0 corosync[2459]: [QUORUM] Members[1]: 6
Node-0 corosync[2459]: [MAIN ] Completed service synchronization, ready to provide service.
Node-0 corosync[2459]: [QUORUM] Sync members[1]: 6
Node-0 corosync[2459]: [TOTEM ] A new membership (6.1485c6) was formed. Members
Node-0 corosync[2459]: [QUORUM] Members[1]: 6
Node-0 corosync[2459]: [MAIN ] Completed service synchronization, ready to provide service.
Node-0 corosync[2459]: [QUORUM] Sync members[1]: 6
Node-0 corosync[2459]: [TOTEM ] A new membership (6.1485ca) was formed. Members
Node-0 corosync[2459]: [QUORUM] Members[1]: 6
Node-0 corosync[2459]: [MAIN ] Completed service synchronization, ready to provide service.
Node-0 corosync[2459]: [QUORUM] Sync members[1]: 6
Node-0 corosync[2459]: [TOTEM ] A new membership (6.1485ce) was formed. Members
Node-0 corosync[2459]: [QUORUM] Members[1]: 6
Node-0 corosync[2459]: [MAIN ] Completed service synchronization, ready to provide service.
Node-0 postfix/smtp[6336]: E7B8228CEF: to=<user@fakeEmail.TLD>, relay=gmail-smtp-in.l.google.com[108.177.13.27]:25, delay=5.6, delays=0/0.01/5.1/0.46, dsn=5.7.25, status=bounced (host gmail-smtp-in.l.google.com[108.177.13.27] said: 550-5.7.25 [64.135.100.50] The IP address sending this message does not have a 550-5.7.25 PTR record setup, or the corresponding forward DNS entry does not 550-5.7.25 point to the sending IP. As a policy, Gmail does not accept messages 550-5.7.25 from IPs with missing PTR records. Please visit 550-5.7.25 https://support.google.com/mail/answer/81126#ip-practices for more 550 5.7.25 information. j1-20020a056102000100b003ca3acfcd8bsi6113836vsp.305 - gsmtp (in reply to end of DATA command))
Node-0 postfix/qmgr[2450]: E7B8228CEF: removed
Node-0 postfix/cleanup[6330]: 8ADA628CF1: message-id=<20230114022001.8ADA628CF1@Node-0.fakeDomain.TLD>

######################################## Repeated for 1061 lines ########################################

Node-0 corosync[2459]: [QUORUM] Sync members[1]: 6
Node-0 corosync[2459]: [TOTEM ] A new membership (6.1485d2) was formed. Members
Node-0 corosync[2459]: [QUORUM] Members[1]: 6
Node-0 corosync[2459]: [MAIN ] Completed service synchronization, ready to provide service.

######################################## /Repeated for 1061 lines ########################################

Node-0 corosync[2459]: [MAIN ] Completed service synchronization, ready to provide service.
Node-0 sshd[11179]: Accepted publickey for root from 192.168.192.1 port 42726 ssh2: RSA SHA256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Node-0 sshd[11179]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Node-0 systemd[1]: Created slice User Slice of UID 0.
Node-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Node-0 systemd-logind[1885]: New session 2 of user root.
Node-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Node-0 systemd[1]: Starting User Manager for UID 0...
Node-0 systemd[11182]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Node-0 corosync[2459]: [QUORUM] Sync members[1]: 6
Node-0 corosync[2459]: [TOTEM ] A new membership (6.1487ea) was formed. Members
Node-0 corosync[2459]: [QUORUM] Members[1]: 6
Node-0 corosync[2459]: [MAIN ] Completed service synchronization, ready to provide service.
 
