[SOLVED] Cluster Fails after one Day - PVE 6.0.4

cpzengel

Hi,

New installation of PVE 6 with two nodes.
After about one to two days, the cluster fails.
Last time I removed pve2 and re-added it.

Any idea how to get this running stably?

Code:
Quorum information
------------------
Date:             Sun Jul 21 10:35:40 2019
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1/97404
Quorate:          No

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           2 Activity blocked
Flags:           

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.50.221 (local)

Code:
Quorum information
------------------
Date:             Sun Jul 21 10:36:06 2019
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000002
Ring ID:          2/5140
Quorate:          No

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           2 Activity blocked
Flags:           

Membership information
----------------------
    Nodeid      Votes Name
0x00000002          1 192.168.50.223 (local)

Code:
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-4.15: 5.4-6
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph-fuse: 12.2.12-pve1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1
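The two quorum outputs above are the expected failure mode of a split two-node cluster: votequorum needs a strict majority, floor(expected_votes / 2) + 1, so with two expected votes each isolated node's single vote can never be quorate. A quick sketch of the arithmetic:

```python
# Votequorum-style majority (without two_node / last_man_standing tweaks):
# a partition is quorate only with a strict majority of the expected votes.
def quorum_threshold(expected_votes: int) -> int:
    return expected_votes // 2 + 1

def is_quorate(total_votes: int, expected_votes: int) -> bool:
    return total_votes >= quorum_threshold(expected_votes)

print(quorum_threshold(2))  # 2 -- matches "Quorum: 2" above
print(is_quorate(1, 2))     # False -- "Activity blocked"
```

This is why both halves of a split two-node cluster block at the same time; corosync's `two_node: 1` votequorum option or an external QDevice are the usual ways to keep one side quorate.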
 
Can you also please post the "corosync.conf",
Code:
cat /etc/pve/corosync.conf

And some journal/syslog from around the event, so we can see more specifically why it actually chokes?
 
Sure :)

Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.50.221
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.50.223
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: sysops-rz
  config_version: 6
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}

Code:
Jul 19 16:50:56 pve1 systemd[1]: Starting Corosync Cluster Engine...
Jul 19 16:50:56 pve1 corosync[10141]:   [MAIN  ] Corosync Cluster Engine 3.0.2-dirty starting up
Jul 19 16:50:56 pve1 corosync[10141]:   [MAIN  ] Corosync built-in features: dbus monitoring watchdog systemd xmlconf snmp pie relro bindno
Jul 19 16:50:56 pve1 corosync[10141]:   [TOTEM ] Initializing transport (Kronosnet).
Jul 19 16:50:56 pve1 corosync[10141]:   [TOTEM ] kronosnet crypto initialized: aes256/sha256
Jul 19 16:50:56 pve1 corosync[10141]:   [TOTEM ] totemknet initialized
Jul 19 16:50:56 pve1 corosync[10141]:   [KNET  ] common: crypto_nss.so has been loaded from /usr/lib/x86_64-linux-gnu/kronosnet/crypto_nss.
Jul 19 16:50:57 pve1 corosync[10141]:   [SERV  ] Service engine loaded: corosync configuration map access [0]
Jul 19 16:50:57 pve1 corosync[10141]:   [QB    ] server name: cmap
Jul 19 16:50:57 pve1 corosync[10141]:   [SERV  ] Service engine loaded: corosync configuration service [1]
Jul 19 16:50:57 pve1 corosync[10141]:   [QB    ] server name: cfg
Jul 19 16:50:57 pve1 corosync[10141]:   [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Jul 19 16:50:57 pve1 corosync[10141]:   [QB    ] server name: cpg
Jul 19 16:50:57 pve1 corosync[10141]:   [SERV  ] Service engine loaded: corosync profile loading service [4]
Jul 19 16:50:57 pve1 corosync[10141]:   [SERV  ] Service engine loaded: corosync resource monitoring service [6]
Jul 19 16:50:57 pve1 corosync[10141]:   [WD    ] Watchdog not enabled by configuration
Jul 19 16:50:57 pve1 corosync[10141]:   [WD    ] resource load_15min missing a recovery key.
Jul 19 16:50:57 pve1 corosync[10141]:   [WD    ] resource memory_used missing a recovery key.
Jul 19 16:50:57 pve1 corosync[10141]:   [WD    ] no resources configured.
Jul 19 16:50:57 pve1 corosync[10141]:   [SERV  ] Service engine loaded: corosync watchdog service [7]
Jul 19 16:50:57 pve1 corosync[10141]:   [QUORUM] Using quorum provider corosync_votequorum
Jul 19 16:50:57 pve1 corosync[10141]:   [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Jul 19 16:50:57 pve1 corosync[10141]:   [QB    ] server name: votequorum
Jul 19 16:50:57 pve1 corosync[10141]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Jul 19 16:50:57 pve1 corosync[10141]:   [QB    ] server name: quorum
Jul 19 16:50:57 pve1 corosync[10141]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 0)
Jul 19 16:50:57 pve1 corosync[10141]:   [KNET  ] host: host: 1 has no active links
Jul 19 16:50:57 pve1 corosync[10141]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 19 16:50:57 pve1 corosync[10141]:   [KNET  ] host: host: 2 has no active links
Jul 19 16:50:57 pve1 corosync[10141]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 19 16:50:57 pve1 corosync[10141]:   [KNET  ] host: host: 2 has no active links
Jul 19 16:50:57 pve1 corosync[10141]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 19 16:50:57 pve1 corosync[10141]:   [KNET  ] host: host: 2 has no active links
Jul 19 16:50:57 pve1 corosync[10141]:   [TOTEM ] A new membership (1:96) was formed. Members joined: 1
Jul 19 16:50:57 pve1 corosync[10141]:   [CPG   ] downlist left_list: 0 received
...skipping...
Jul 22 09:03:05 pve1 corosync[55832]:   [MAIN  ] Completed service synchronization, ready to provide service.
Jul 22 09:04:46 pve1 corosync[55832]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 09:04:46 pve1 corosync[55832]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 22 09:04:46 pve1 corosync[55832]:   [KNET  ] host: host: 2 has no active links
Jul 22 09:04:46 pve1 corosync[55832]:   [TOTEM ] Token has not been received in 36 ms
Jul 22 09:04:47 pve1 corosync[55832]:   [KNET  ] rx: host: 2 link: 0 is up
Jul 22 09:04:47 pve1 corosync[55832]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 22 09:04:47 pve1 corosync[55832]:   [TOTEM ] A new membership (1:98364) was formed. Members
Jul 22 09:04:47 pve1 corosync[55832]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:04:47 pve1 corosync[55832]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:04:47 pve1 corosync[55832]:   [QUORUM] Members[2]: 1 2
Jul 22 09:04:47 pve1 corosync[55832]:   [MAIN  ] Completed service synchronization, ready to provide service.
Jul 22 09:16:32 pve1 corosync[55832]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 09:16:32 pve1 corosync[55832]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 22 09:16:32 pve1 corosync[55832]:   [KNET  ] host: host: 2 has no active links
Jul 22 09:16:32 pve1 corosync[55832]:   [TOTEM ] Token has not been received in 36 ms
Jul 22 09:16:32 pve1 corosync[55832]:   [TOTEM ] A processor failed, forming new configuration.
Jul 22 09:16:33 pve1 corosync[55832]:   [KNET  ] rx: host: 2 link: 0 is up
Jul 22 09:16:33 pve1 corosync[55832]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 22 09:16:33 pve1 corosync[55832]:   [TOTEM ] A new membership (1:98368) was formed. Members
Jul 22 09:16:33 pve1 corosync[55832]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:16:33 pve1 corosync[55832]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:16:33 pve1 corosync[55832]:   [QUORUM] Members[2]: 1 2
Jul 22 09:16:33 pve1 corosync[55832]:   [MAIN  ] Completed service synchronization, ready to provide service.
Jul 22 09:25:28 pve1 corosync[55832]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 09:25:28 pve1 corosync[55832]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 22 09:25:28 pve1 corosync[55832]:   [KNET  ] host: host: 2 has no active links
Jul 22 09:25:29 pve1 corosync[55832]:   [TOTEM ] Token has not been received in 36 ms
Jul 22 09:25:29 pve1 corosync[55832]:   [TOTEM ] A processor failed, forming new configuration.
Jul 22 09:25:29 pve1 corosync[55832]:   [KNET  ] rx: host: 2 link: 0 is up
Jul 22 09:25:29 pve1 corosync[55832]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 22 09:25:29 pve1 corosync[55832]:   [TOTEM ] A new membership (1:98372) was formed. Members
Jul 22 09:25:29 pve1 corosync[55832]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:25:29 pve1 corosync[55832]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:25:29 pve1 corosync[55832]:   [QUORUM] Members[2]: 1 2
Jul 22 09:25:29 pve1 corosync[55832]:   [MAIN  ] Completed service synchronization, ready to provide service.
 
Code:
Jul 22 09:25:28 pve1 corosync[55832]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 09:25:28 pve1 corosync[55832]:   [KNET  ] host: host: 2 (passive) best link: 0 (pri: 1)
Jul 22 09:25:28 pve1 corosync[55832]:   [KNET  ] host: host: 2 has no active links
Jul 22 09:25:29 pve1 corosync[55832]:   [TOTEM ] Token has not been received in 36 ms
Jul 22 09:25:29 pve1 corosync[55832]:   [TOTEM ] A processor failed, forming new configuration.

So, did the 192.168.50 network go down, or is there bandwidth-heavy traffic on it? From that log it seems corosync manages to catch itself again about a second later...
corosync.conf looks OK.
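Since the single link keeps flapping, one hardening option worth considering is a second kronosnet link on a separate NIC/subnet, so corosync can fail over instead of losing the node. A sketch of what the nodelist could look like (the 10.10.10.x addresses are placeholders for a hypothetical second network; remember to bump config_version when editing /etc/pve/corosync.conf):

```
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.50.221
    ring1_addr: 10.10.10.221
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.50.223
    ring1_addr: 10.10.10.223
  }
}
```

With a second `ringX_addr` per node, kronosnet treats it as an additional link and keeps membership up as long as either network works.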
 
Monitoring is fine. No complaints.

pve1
Code:
Jul 22 09:45:14 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:45:14 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:53:44 pve1 corosync[52871]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 09:53:45 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:53:45 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:04:46 pve1 corosync[52871]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 10:04:47 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:04:47 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:22:52 pve1 corosync[52871]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 10:22:54 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:22:54 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:23:39 pve1 corosync[52871]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 10:23:40 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:23:40 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:32:43 pve1 corosync[52871]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 10:32:44 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:32:44 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:33:37 pve1 corosync[52871]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 10:33:38 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:33:38 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:40:36 pve1 corosync[52871]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 10:40:37 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:40:37 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:44:18 pve1 corosync[52871]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 10:44:19 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:44:19 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:49:47 pve1 corosync[52871]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 10:49:48 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:49:48 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:03:09 pve1 corosync[52871]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 11:03:10 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:03:10 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:16:25 pve1 corosync[52871]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 11:16:26 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:16:26 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:16:35 pve1 corosync[52871]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 11:16:36 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:16:36 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:17:37 pve1 corosync[52871]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 11:17:38 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:17:38 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:34:15 pve1 corosync[52871]:   [KNET  ] link: host: 2 link: 0 is down
Jul 22 11:34:16 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:34:16 pve1 corosync[52871]:   [CPG   ] downlist left_list: 0 received

pve2
Code:
Jul 22 08:56:46 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:03:05 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:03:05 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:04:47 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:04:47 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:07:44 pve2 systemd[63117]: Reached target Shutdown.
Jul 22 09:15:42 pve2 systemd[11882]: Reached target Shutdown.
Jul 22 09:16:33 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:16:33 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:25:29 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:25:29 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:27:47 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:27:47 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:33:52 pve2 corosync[4230]:   [CPG   ] downlist left_list: 1 received
Jul 22 09:39:18 pve2 corosync[4230]:   [KNET  ] link: host: 1 link: 0 is down
Jul 22 09:39:19 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:39:19 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:41:14 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:41:14 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:42:27 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:42:27 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:45:14 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:45:14 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:48:37 pve2 systemd[38977]: Reached target Shutdown.
Jul 22 09:53:45 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 09:53:45 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:04:34 pve2 systemd[47390]: Reached target Shutdown.
Jul 22 10:04:47 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:04:47 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:22:54 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:22:54 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:23:40 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:23:40 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:32:44 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:32:44 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:33:38 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:33:38 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:40:37 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:40:37 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:44:19 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:44:19 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:46:38 pve2 systemd[43068]: Reached target Shutdown.
Jul 22 10:49:48 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 10:49:48 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:02:33 pve2 systemd[37890]: Reached target Shutdown.
Jul 22 11:03:10 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:03:10 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:16:26 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:16:26 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:16:36 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:16:36 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:17:38 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:17:38 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:34:16 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
Jul 22 11:34:16 pve2 corosync[4230]:   [CPG   ] downlist left_list: 0 received
 
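Both journals show the same pattern: link 0 to the peer drops every few minutes and comes back about a second later. To put numbers on the flapping, a small throwaway helper (hypothetical, not a PVE tool) can pull the link-down timestamps out of a saved journal excerpt:

```python
import re
from datetime import datetime

# Matches syslog-style corosync lines such as:
#   Jul 22 09:04:46 pve1 corosync[55832]:   [KNET  ] link: host: 2 link: 0 is down
DOWN_RE = re.compile(r"^(\w{3} +\d+ [\d:]+) .*\[KNET *\] link: host: \d+ link: \d+ is down")

def link_down_times(lines, year=2019):
    """Return the timestamps of every 'link ... is down' event."""
    times = []
    for line in lines:
        m = DOWN_RE.match(line)
        if m:
            times.append(datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S"))
    return times

log = [
    "Jul 22 09:04:46 pve1 corosync[55832]:   [KNET  ] link: host: 2 link: 0 is down",
    "Jul 22 09:04:47 pve1 corosync[55832]:   [KNET  ] rx: host: 2 link: 0 is up",
    "Jul 22 09:16:32 pve1 corosync[55832]:   [KNET  ] link: host: 2 link: 0 is down",
]
downs = link_down_times(log)
gaps = [(b - a).total_seconds() for a, b in zip(downs, downs[1:])]
print(len(downs), gaps)  # 2 [706.0] -- two drops, ~12 minutes apart
```

Feeding it the full excerpts above would show whether the drops cluster at fixed intervals (which would hint at a scheduled job or a periodic load spike on the 192.168.50 network).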
Could you also provide the pve-cluster logs? Is HA active ('pve2' seems to attempt to shut down)?
 
"journalctl -u pve-cluster"

Did it actually reboot? Maybe your cluster keeps falling apart because your second node is failing to shut down?
 
The reboot was fine yesterday.

pve1
Code:
Jul 21 19:50:54 pve1 pmxcfs[8219]: [dcdb] notice: data verification successful
Jul 21 20:50:54 pve1 pmxcfs[8219]: [dcdb] notice: data verification successful
Jul 21 21:50:54 pve1 pmxcfs[8219]: [dcdb] notice: data verification successful
Jul 21 22:50:54 pve1 pmxcfs[8219]: [dcdb] notice: data verification successful
Jul 21 23:50:54 pve1 pmxcfs[8219]: [dcdb] notice: data verification successful
Jul 22 00:50:54 pve1 pmxcfs[8219]: [dcdb] notice: data verification successful
Jul 22 01:50:54 pve1 pmxcfs[8219]: [dcdb] notice: data verification successful
Jul 22 02:50:54 pve1 pmxcfs[8219]: [dcdb] notice: data verification successful
Jul 22 03:50:54 pve1 pmxcfs[8219]: [dcdb] notice: data verification successful
Jul 22 04:04:22 pve1 pmxcfs[8219]: [status] notice: received log
Jul 22 04:04:24 pve1 pmxcfs[8219]: [status] notice: received log
Jul 22 04:50:54 pve1 pmxcfs[8219]: [dcdb] notice: data verification successful
Jul 22 05:50:54 pve1 pmxcfs[8219]: [dcdb] notice: data verification successful
Jul 22 06:50:54 pve1 pmxcfs[8219]: [dcdb] notice: data verification successful
Jul 22 07:50:54 pve1 pmxcfs[8219]: [dcdb] notice: data verification successful
Jul 22 08:50:54 pve1 pmxcfs[8219]: [dcdb] notice: data verification successful
Jul 22 09:33:54 pve1 pmxcfs[8219]: [dcdb] notice: members: 1/8219
Jul 22 09:33:54 pve1 pmxcfs[8219]: [status] notice: node lost quorum
Jul 22 09:33:54 pve1 pmxcfs[8219]: [status] notice: members: 1/8219
Jul 22 09:39:17 pve1 pmxcfs[8219]: [confdb] crit: cmap_dispatch failed: 2
Jul 22 09:39:17 pve1 pmxcfs[8219]: [status] crit: cpg_dispatch failed: 2
Jul 22 09:39:17 pve1 pmxcfs[8219]: [status] crit: cpg_leave failed: 2
Jul 22 09:39:17 pve1 pmxcfs[8219]: [dcdb] crit: cpg_dispatch failed: 2
Jul 22 09:39:17 pve1 pmxcfs[8219]: [dcdb] crit: cpg_leave failed: 2
Jul 22 09:39:17 pve1 pmxcfs[8219]: [quorum] crit: quorum_dispatch failed: 2
Jul 22 09:39:18 pve1 pmxcfs[8219]: [quorum] crit: quorum_initialize failed: 2
Jul 22 09:39:18 pve1 pmxcfs[8219]: [quorum] crit: can't initialize service
Jul 22 09:39:18 pve1 pmxcfs[8219]: [confdb] crit: cmap_initialize failed: 2
Jul 22 09:39:18 pve1 pmxcfs[8219]: [confdb] crit: can't initialize service
Jul 22 09:39:18 pve1 pmxcfs[8219]: [dcdb] notice: start cluster connection
Jul 22 09:39:18 pve1 pmxcfs[8219]: [dcdb] crit: cpg_initialize failed: 2
Jul 22 09:39:18 pve1 pmxcfs[8219]: [dcdb] crit: can't initialize service
Jul 22 09:39:18 pve1 pmxcfs[8219]: [status] notice: start cluster connection
Jul 22 09:39:18 pve1 pmxcfs[8219]: [status] crit: cpg_initialize failed: 2
Jul 22 09:39:18 pve1 pmxcfs[8219]: [status] crit: can't initialize service
Jul 22 09:39:24 pve1 pmxcfs[8219]: [status] notice: update cluster info (cluster name  sysops-rz, version = 6)
Jul 22 09:39:24 pve1 pmxcfs[8219]: [status] notice: node has quorum
Jul 22 09:39:24 pve1 pmxcfs[8219]: [dcdb] notice: members: 1/8219, 2/3935
Jul 22 09:39:24 pve1 pmxcfs[8219]: [dcdb] notice: starting data syncronisation
Jul 22 09:39:24 pve1 pmxcfs[8219]: [status] notice: members: 1/8219, 2/3935
Jul 22 09:39:24 pve1 pmxcfs[8219]: [status] notice: starting data syncronisation


pve2
Code:
-- Logs begin at Sun 2019-07-21 10:19:27 CEST, end at Mon 2019-07-22 15:08:01 CEST. --
Jul 21 10:19:34 pve2 systemd[1]: Starting The Proxmox VE cluster filesystem...
Jul 21 10:19:34 pve2 pmxcfs[3935]: [quorum] crit: quorum_initialize failed: 2
Jul 21 10:19:34 pve2 pmxcfs[3935]: [quorum] crit: can't initialize service
Jul 21 10:19:34 pve2 pmxcfs[3935]: [confdb] crit: cmap_initialize failed: 2
Jul 21 10:19:34 pve2 pmxcfs[3935]: [confdb] crit: can't initialize service
Jul 21 10:19:34 pve2 pmxcfs[3935]: [dcdb] crit: cpg_initialize failed: 2
Jul 21 10:19:34 pve2 pmxcfs[3935]: [dcdb] crit: can't initialize service
Jul 21 10:19:34 pve2 pmxcfs[3935]: [status] crit: cpg_initialize failed: 2
Jul 21 10:19:34 pve2 pmxcfs[3935]: [status] crit: can't initialize service
Jul 21 10:19:36 pve2 systemd[1]: Started The Proxmox VE cluster filesystem.
Jul 21 10:19:40 pve2 pmxcfs[3935]: [status] notice: update cluster info (cluster name  sysops-rz, version = 6)
Jul 21 10:19:40 pve2 pmxcfs[3935]: [dcdb] notice: members: 2/3935
Jul 21 10:19:40 pve2 pmxcfs[3935]: [dcdb] notice: all data is up to date
Jul 21 10:19:40 pve2 pmxcfs[3935]: [status] notice: members: 2/3935
Jul 21 10:19:40 pve2 pmxcfs[3935]: [status] notice: all data is up to date
Jul 21 10:39:16 pve2 pmxcfs[3935]: [status] notice: node has quorum
Jul 21 10:39:20 pve2 pmxcfs[3935]: [dcdb] notice: members: 1/8219, 2/3935
Jul 21 10:39:20 pve2 pmxcfs[3935]: [dcdb] notice: starting data syncronisation
Jul 21 10:39:20 pve2 pmxcfs[3935]: [dcdb] notice: received sync request (epoch 1/8219/00000009)
Jul 21 10:39:20 pve2 pmxcfs[3935]: [status] notice: members: 1/8219, 2/3935
Jul 21 10:39:20 pve2 pmxcfs[3935]: [status] notice: starting data syncronisation
Jul 21 10:39:20 pve2 pmxcfs[3935]: [status] notice: received sync request (epoch 1/8219/00000005)
Jul 21 10:39:20 pve2 pmxcfs[3935]: [dcdb] notice: received all states
Jul 21 10:39:20 pve2 pmxcfs[3935]: [dcdb] notice: leader is 2/3935
Jul 21 10:39:20 pve2 pmxcfs[3935]: [dcdb] notice: synced members: 2/3935
Jul 21 10:39:20 pve2 pmxcfs[3935]: [dcdb] notice: start sending inode updates
Jul 21 10:39:20 pve2 pmxcfs[3935]: [dcdb] notice: sent all (2) updates
Jul 21 10:39:20 pve2 pmxcfs[3935]: [dcdb] notice: all data is up to date
Jul 21 10:39:20 pve2 pmxcfs[3935]: [status] notice: received all states
Jul 21 10:39:20 pve2 pmxcfs[3935]: [status] notice: all data is up to date
Jul 21 10:39:28 pve2 pmxcfs[3935]: [status] notice: received log
Jul 21 10:50:54 pve2 pmxcfs[3935]: [dcdb] notice: data verification successful
Jul 21 11:50:54 pve2 pmxcfs[3935]: [dcdb] notice: data verification successful
Jul 21 12:50:54 pve2 pmxcfs[3935]: [dcdb] notice: data verification successful
Jul 21 13:50:54 pve2 pmxcfs[3935]: [dcdb] notice: data verification successful
Jul 21 14:50:54 pve2 pmxcfs[3935]: [dcdb] notice: data verification successful
Jul 21 15:50:54 pve2 pmxcfs[3935]: [dcdb] notice: data verification successful
Jul 21 16:50:54 pve2 pmxcfs[3935]: [dcdb] notice: data verification successful
Jul 21 17:50:54 pve2 pmxcfs[3935]: [dcdb] notice: data verification successful
Jul 21 18:50:54 pve2 pmxcfs[3935]: [dcdb] notice: data verification successful
Jul 21 19:50:54 pve2 pmxcfs[3935]: [dcdb] notice: data verification successful
 
The continuous "Reached target Shutdown" logs are definitely not normal; a host does not normally get into that state on its own.

Also, the above logs are not only from different time periods, they're even from different days...

I'd recommend bringing both nodes into a clean state; the best way is probably to just reboot.
Then see if the problem still exists. If yes, restart corosync and pve-cluster on both nodes, and once the problem comes up again, post the logs from both nodes for the same time period, so we can actually see what's going on there.
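For reference, the restart-and-capture step could look roughly like this (a sketch, assuming root on both nodes; the guards only make it degrade gracefully on other machines, and the `--since` window is something you'd widen to cover the actual event):

```shell
# Restart the cluster stack, then pull one combined, time-aligned
# journal excerpt per node so both sides cover the same window.
for svc in corosync pve-cluster; do
  if command -v systemctl >/dev/null 2>&1; then
    systemctl restart "$svc" || echo "restart of $svc failed" >&2
  fi
done

if command -v journalctl >/dev/null 2>&1; then
  journalctl -u corosync -u pve-cluster --since "-30min" --no-pager || true
fi
```

Running the journalctl line with the same `--since`/`--until` bounds on both nodes is what makes the two excerpts directly comparable.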
 
Daily disconnect again; a restart at 22:37 did the job.

pve1

Code:
Jul 24 18:28:17 pve1 systemd[1]: Starting The Proxmox VE cluster filesystem...
Jul 24 18:28:17 pve1 pmxcfs[5747]: [quorum] crit: quorum_initialize failed: 2
Jul 24 18:28:17 pve1 pmxcfs[5747]: [quorum] crit: can't initialize service
Jul 24 18:28:17 pve1 pmxcfs[5747]: [confdb] crit: cmap_initialize failed: 2
Jul 24 18:28:17 pve1 pmxcfs[5747]: [confdb] crit: can't initialize service
Jul 24 18:28:17 pve1 pmxcfs[5747]: [dcdb] crit: cpg_initialize failed: 2
Jul 24 18:28:17 pve1 pmxcfs[5747]: [dcdb] crit: can't initialize service
Jul 24 18:28:17 pve1 pmxcfs[5747]: [status] crit: cpg_initialize failed: 2
Jul 24 18:28:17 pve1 pmxcfs[5747]: [status] crit: can't initialize service
Jul 24 18:28:19 pve1 systemd[1]: Started The Proxmox VE cluster filesystem.
Jul 24 18:28:23 pve1 pmxcfs[5747]: [status] notice: update cluster info (cluster name  sysops-rz, version = 6)
Jul 24 18:28:23 pve1 pmxcfs[5747]: [status] notice: node has quorum
Jul 24 18:28:23 pve1 pmxcfs[5747]: [dcdb] notice: members: 1/5747, 2/3612
Jul 24 18:28:23 pve1 pmxcfs[5747]: [dcdb] notice: starting data syncronisation
Jul 24 18:28:23 pve1 pmxcfs[5747]: [dcdb] notice: received sync request (epoch 1/5747/00000001)
Jul 24 18:28:23 pve1 pmxcfs[5747]: [status] notice: members: 1/5747, 2/3612
Jul 24 18:28:23 pve1 pmxcfs[5747]: [status] notice: starting data syncronisation
Jul 24 18:28:23 pve1 pmxcfs[5747]: [dcdb] notice: received all states
Jul 24 18:28:23 pve1 pmxcfs[5747]: [dcdb] notice: leader is 1/5747
Jul 24 18:28:23 pve1 pmxcfs[5747]: [dcdb] notice: synced members: 1/5747, 2/3612
Jul 24 18:28:23 pve1 pmxcfs[5747]: [dcdb] notice: start sending inode updates
Jul 24 18:28:23 pve1 pmxcfs[5747]: [dcdb] notice: sent all (0) updates
Jul 24 18:28:23 pve1 pmxcfs[5747]: [dcdb] notice: all data is up to date
Jul 24 18:28:23 pve1 pmxcfs[5747]: [status] notice: received sync request (epoch 1/5747/00000001)
Jul 24 18:28:23 pve1 pmxcfs[5747]: [status] notice: received all states
Jul 24 18:28:23 pve1 pmxcfs[5747]: [status] notice: all data is up to date
Jul 24 18:38:10 pve1 pmxcfs[5747]: [status] notice: received log
Jul 24 18:38:10 pve1 pmxcfs[5747]: [status] notice: received log
Jul 24 18:38:15 pve1 pmxcfs[5747]: [status] notice: received log
Jul 24 18:38:15 pve1 pmxcfs[5747]: [status] notice: received log
Jul 24 18:38:20 pve1 pmxcfs[5747]: [status] notice: received log
Jul 24 18:53:09 pve1 pmxcfs[5747]: [status] notice: received log
Jul 24 19:28:17 pve1 pmxcfs[5747]: [dcdb] notice: data verification successful
Jul 24 20:18:04 pve1 pmxcfs[5747]: [status] notice: received log
Jul 24 20:28:17 pve1 pmxcfs[5747]: [dcdb] notice: data verification successful
Jul 24 21:28:17 pve1 pmxcfs[5747]: [dcdb] notice: data verification successful
Jul 24 22:28:17 pve1 pmxcfs[5747]: [dcdb] notice: data verification successful
Jul 24 23:28:17 pve1 pmxcfs[5747]: [dcdb] notice: data verification successful
Jul 25 00:28:17 pve1 pmxcfs[5747]: [dcdb] notice: data verification successful
Jul 25 01:28:17 pve1 pmxcfs[5747]: [dcdb] notice: data verification successful
Jul 25 02:28:17 pve1 pmxcfs[5747]: [dcdb] notice: data verification successful
...skipping...
Jul 25 20:13:01 pve1 pmxcfs[5747]: [status] notice: cpg_send_message retried 1 times
Jul 25 20:28:18 pve1 pmxcfs[5747]: [dcdb] notice: data verification successful
Jul 25 20:35:12 pve1 pmxcfs[5747]: [status] notice: cpg_send_message retried 1 times
Jul 25 21:07:22 pve1 pmxcfs[5747]: [status] notice: cpg_send_message retried 1 times
Jul 25 21:21:01 pve1 pmxcfs[5747]: [status] notice: cpg_send_message retried 1 times
Jul 25 21:28:18 pve1 pmxcfs[5747]: [dcdb] notice: data verification successful
Jul 25 21:50:41 pve1 pmxcfs[5747]: [status] notice: cpg_send_message retried 1 times
Jul 25 22:28:18 pve1 pmxcfs[5747]: [dcdb] notice: data verification successful
Jul 25 22:29:11 pve1 pmxcfs[5747]: [status] notice: cpg_send_message retried 1 times
Jul 25 22:35:11 pve1 pmxcfs[5747]: [status] notice: cpg_send_message retried 1 times
Jul 25 22:36:11 pve1 pmxcfs[5747]: [confdb] crit: cmap_dispatch failed: 2
Jul 25 22:36:11 pve1 pmxcfs[5747]: [dcdb] crit: cpg_dispatch failed: 2
Jul 25 22:36:11 pve1 pmxcfs[5747]: [dcdb] crit: cpg_leave failed: 2
Jul 25 22:36:11 pve1 pmxcfs[5747]: [status] crit: cpg_dispatch failed: 2
Jul 25 22:36:11 pve1 pmxcfs[5747]: [status] crit: cpg_leave failed: 2
Jul 25 22:36:11 pve1 pmxcfs[5747]: [quorum] crit: quorum_dispatch failed: 2
Jul 25 22:36:12 pve1 pmxcfs[5747]: [quorum] crit: quorum_initialize failed: 2
Jul 25 22:36:12 pve1 pmxcfs[5747]: [quorum] crit: can't initialize service
Jul 25 22:36:12 pve1 pmxcfs[5747]: [confdb] crit: cmap_initialize failed: 2
Jul 25 22:36:12 pve1 pmxcfs[5747]: [confdb] crit: can't initialize service
Jul 25 22:36:12 pve1 pmxcfs[5747]: [dcdb] notice: start cluster connection
Jul 25 22:36:12 pve1 pmxcfs[5747]: [dcdb] crit: cpg_initialize failed: 2
Jul 25 22:36:12 pve1 pmxcfs[5747]: [dcdb] crit: can't initialize service
Jul 25 22:36:12 pve1 pmxcfs[5747]: [status] notice: start cluster connection
Jul 25 22:36:12 pve1 pmxcfs[5747]: [status] crit: cpg_initialize failed: 2
Jul 25 22:36:12 pve1 pmxcfs[5747]: [status] crit: can't initialize service
Jul 25 22:36:18 pve1 pmxcfs[5747]: [status] notice: update cluster info (cluster name  sysops-rz, version = 6)
Jul 25 22:36:18 pve1 pmxcfs[5747]: [status] notice: node has quorum
Jul 25 22:36:18 pve1 pmxcfs[5747]: [dcdb] notice: members: 1/5747, 2/3612
Jul 25 22:36:18 pve1 pmxcfs[5747]: [dcdb] notice: starting data syncronisation
Jul 25 22:36:18 pve1 pmxcfs[5747]: [dcdb] notice: received sync request (epoch 1/5747/00000005)
Jul 25 22:36:18 pve1 pmxcfs[5747]: [status] notice: members: 1/5747, 2/3612
Jul 25 22:36:18 pve1 pmxcfs[5747]: [status] notice: starting data syncronisation
Jul 25 22:36:18 pve1 pmxcfs[5747]: [status] notice: received sync request (epoch 1/5747/00000003)
Jul 25 22:36:18 pve1 pmxcfs[5747]: [dcdb] notice: received all states
Jul 25 22:36:18 pve1 pmxcfs[5747]: [dcdb] notice: leader is 1/5747
Jul 25 22:36:18 pve1 pmxcfs[5747]: [dcdb] notice: synced members: 1/5747, 2/3612
Jul 25 22:36:18 pve1 pmxcfs[5747]: [dcdb] notice: start sending inode updates
Jul 25 22:36:18 pve1 pmxcfs[5747]: [dcdb] notice: sent all (0) updates
Jul 25 22:36:18 pve1 pmxcfs[5747]: [dcdb] notice: all data is up to date
Jul 25 22:36:18 pve1 pmxcfs[5747]: [status] notice: received all states
Jul 25 22:36:18 pve1 pmxcfs[5747]: [status] notice: all data is up to date

pve2

Code:
Jul 25 12:28:17 pve2 pmxcfs[3612]: [dcdb] notice: data verification successful
Jul 25 13:28:17 pve2 pmxcfs[3612]: [dcdb] notice: data verification successful
Jul 25 14:28:17 pve2 pmxcfs[3612]: [dcdb] notice: data verification successful
Jul 25 15:28:17 pve2 pmxcfs[3612]: [dcdb] notice: data verification successful
Jul 25 16:28:17 pve2 pmxcfs[3612]: [dcdb] notice: data verification successful
Jul 25 17:28:17 pve2 pmxcfs[3612]: [dcdb] notice: data verification successful
Jul 25 17:30:44 pve2 pmxcfs[3612]: [status] notice: cpg_send_message retried 1 times
Jul 25 18:28:17 pve2 pmxcfs[3612]: [dcdb] notice: data verification successful
Jul 25 18:58:41 pve2 pmxcfs[3612]: [status] notice: received log
Jul 25 19:28:17 pve2 pmxcfs[3612]: [dcdb] notice: data verification successful
Jul 25 19:41:41 pve2 pmxcfs[3612]: [dcdb] notice: members: 2/3612
Jul 25 19:41:41 pve2 pmxcfs[3612]: [status] notice: members: 2/3612
Jul 25 19:41:41 pve2 pmxcfs[3612]: [status] notice: node lost quorum
Jul 25 19:41:41 pve2 pmxcfs[3612]: [dcdb] crit: received write while not quorate - trigger resync
Jul 25 19:41:41 pve2 pmxcfs[3612]: [dcdb] crit: leaving CPG group
Jul 25 19:41:41 pve2 pmxcfs[3612]: [dcdb] notice: start cluster connection
Jul 25 19:41:41 pve2 pmxcfs[3612]: [dcdb] crit: cpg_join failed: 14
Jul 25 19:41:41 pve2 pmxcfs[3612]: [dcdb] crit: can't initialize service
Jul 25 19:41:47 pve2 pmxcfs[3612]: [dcdb] notice: members: 2/3612
Jul 25 19:41:47 pve2 pmxcfs[3612]: [dcdb] notice: all data is up to date
Jul 25 19:55:07 pve2 pmxcfs[3612]: [dcdb] notice: data verification successful
Jul 25 20:55:07 pve2 pmxcfs[3612]: [dcdb] notice: data verification successful
Jul 25 21:55:07 pve2 pmxcfs[3612]: [dcdb] notice: data verification successful
Jul 25 22:36:13 pve2 pmxcfs[3612]: [status] notice: node has quorum
Jul 25 22:36:18 pve2 pmxcfs[3612]: [dcdb] notice: members: 1/5747, 2/3612
Jul 25 22:36:18 pve2 pmxcfs[3612]: [dcdb] notice: starting data syncronisation
Jul 25 22:36:18 pve2 pmxcfs[3612]: [status] notice: members: 1/5747, 2/3612
Jul 25 22:36:18 pve2 pmxcfs[3612]: [status] notice: starting data syncronisation
Jul 25 22:36:18 pve2 pmxcfs[3612]: [dcdb] notice: received sync request (epoch 1/5747/00000005)
Jul 25 22:36:18 pve2 pmxcfs[3612]: [status] notice: received sync request (epoch 1/5747/00000003)
Jul 25 22:36:18 pve2 pmxcfs[3612]: [dcdb] notice: received all states
Jul 25 22:36:18 pve2 pmxcfs[3612]: [dcdb] notice: leader is 1/5747
Jul 25 22:36:18 pve2 pmxcfs[3612]: [dcdb] notice: synced members: 1/5747, 2/3612
Jul 25 22:36:18 pve2 pmxcfs[3612]: [dcdb] notice: all data is up to date
Jul 25 22:36:18 pve2 pmxcfs[3612]: [status] notice: received all states
Jul 25 22:36:18 pve2 pmxcfs[3612]: [status] notice: all data is up to date
Jul 25 22:37:09 pve2 pmxcfs[3612]: [status] notice: received log
 
Jul 25 18:28:01 pve1 systemd[1]: Started Proxmox VE replication runner.
At the same time, the corosync service failed.

The problems started when I set up a ZFS replication job.
The job did not work, so I removed it later.
It remained visible for a while, but then vanished.
 
I've got the same problem. Same setup: one cluster node was upgraded from 5.4 to 6, the other node is new. They are NOT in the same network, but since knet this shouldn't be a problem.

Here is the output of journalctl -u pve-cluster on Proxmox node 2:
Code:
Jul 26 08:37:41 p2 systemd[1]: Starting The Proxmox VE cluster filesystem...
Jul 26 08:37:41 p2 pmxcfs[890]: [dcdb] notice: wrote new corosync config '/etc/corosync/corosync.conf' (version = 2)
Jul 26 08:37:41 p2 pmxcfs[890]: [dcdb] notice: wrote new corosync config '/etc/corosync/corosync.conf' (version = 2)
Jul 26 08:37:41 p2 pmxcfs[890]: [dcdb] crit: corosync-cfgtool -R failed with exit code 1#010
Jul 26 08:37:41 p2 pmxcfs[890]: [dcdb] crit: corosync-cfgtool -R failed with exit code 1#010
Jul 26 08:37:41 p2 pmxcfs[929]: [quorum] crit: quorum_initialize failed: 2
Jul 26 08:37:41 p2 pmxcfs[929]: [quorum] crit: can't initialize service
Jul 26 08:37:41 p2 pmxcfs[929]: [confdb] crit: cmap_initialize failed: 2
Jul 26 08:37:41 p2 pmxcfs[929]: [confdb] crit: can't initialize service
Jul 26 08:37:41 p2 pmxcfs[929]: [dcdb] crit: cpg_initialize failed: 2
Jul 26 08:37:41 p2 pmxcfs[929]: [dcdb] crit: can't initialize service
Jul 26 08:37:41 p2 pmxcfs[929]: [status] crit: cpg_initialize failed: 2
Jul 26 08:37:41 p2 pmxcfs[929]: [status] crit: can't initialize service
Jul 26 08:37:43 p2 systemd[1]: Started The Proxmox VE cluster filesystem.
Jul 26 08:37:47 p2 pmxcfs[929]: [status] notice: update cluster info (cluster name  ProxCluster, version = 2)
Jul 26 08:37:47 p2 pmxcfs[929]: [dcdb] notice: members: 2/929
Jul 26 08:37:47 p2 pmxcfs[929]: [dcdb] notice: all data is up to date
Jul 26 08:37:47 p2 pmxcfs[929]: [status] notice: members: 2/929
Jul 26 08:37:47 p2 pmxcfs[929]: [status] notice: all data is up to date
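The "quorum_initialize failed: 2" / "cpg_initialize failed: 2" lines at boot usually just mean pmxcfs started before corosync was ready; they are harmless once the later "all data is up to date" appears. If they keep recurring, restarting the stack in the right order is worth a try. A generic sketch (guarded so it is a no-op on machines without these services):

```shell
# Restart corosync first, then pmxcfs (pve-cluster), which sits on top of it.
if command -v pvecm >/dev/null 2>&1; then
    systemctl restart corosync
    systemctl restart pve-cluster
    pvecm status        # verify membership and quorum afterwards
else
    echo "not a Proxmox VE node - run these commands on the node itself"
fi
```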

syslog grep: coro

Code:
root@p2:/var/log# cat syslog |grep coro
Jul 26 03:26:35 p2 corosync[1014]:   [KNET  ] pmtud: possible MTU misconfiguration detected. kernel is reporting MTU: 1500 bytes for host 1 link 0 but the other node is not acknowledging packets of this size.
Jul 26 03:26:35 p2 corosync[1014]:   [KNET  ] pmtud: This can be caused by this node interface MTU too big or a network device that does not support or has been misconfigured to manage MTU of this size, or packet loss. knet will continue to run but performances might be affected.
Jul 26 03:26:35 p2 corosync[1014]:   [KNET  ] pmtud: PMTUD link change for host: 1 link: 0 from 1366 to 1350
Jul 26 03:26:35 p2 corosync[1014]:   [KNET  ] pmtud: Global data MTU changed to: 1350
Jul 26 03:27:05 p2 corosync[1014]:   [KNET  ] pmtud: PMTUD link change for host: 1 link: 0 from 1350 to 1366
Jul 26 03:27:05 p2 corosync[1014]:   [KNET  ] pmtud: Global data MTU changed to: 1366
Jul 26 06:53:15 p2 corosync[1014]:   [KNET  ] pmtud: possible MTU misconfiguration detected. kernel is reporting MTU: 1500 bytes for host 1 link 0 but the other node is not acknowledging packets of this size.
Jul 26 06:53:15 p2 corosync[1014]:   [KNET  ] pmtud: This can be caused by this node interface MTU too big or a network device that does not support or has been misconfigured to manage MTU of this size, or packet loss. knet will continue to run but performances might be affected.
Jul 26 06:53:15 p2 corosync[1014]:   [KNET  ] pmtud: PMTUD link change for host: 1 link: 0 from 1366 to 1350
Jul 26 06:53:15 p2 corosync[1014]:   [KNET  ] pmtud: Global data MTU changed to: 1350
Jul 26 06:53:45 p2 corosync[1014]:   [KNET  ] pmtud: PMTUD link change for host: 1 link: 0 from 1350 to 1366
Jul 26 06:53:45 p2 corosync[1014]:   [KNET  ] pmtud: Global data MTU changed to: 1366
Jul 26 08:37:41 p2 pmxcfs[890]: [dcdb] notice: wrote new corosync config '/etc/corosync/corosync.conf' (version = 2)
Jul 26 08:37:41 p2 pmxcfs[890]: [dcdb] notice: wrote new corosync config '/etc/corosync/corosync.conf' (version = 2)
Jul 26 08:37:41 p2 pmxcfs[890]: [dcdb] crit: corosync-cfgtool -R failed with exit code 1#010
Jul 26 08:37:41 p2 pmxcfs[890]: [dcdb] crit: corosync-cfgtool -R failed with exit code 1#010
Jul 26 08:37:43 p2 corosync[1015]:   [MAIN  ] Corosync Cluster Engine 3.0.2-dirty starting up
Jul 26 08:37:43 p2 corosync[1015]:   [MAIN  ] Corosync built-in features: dbus monitoring watchdog systemd xmlconf snmp pie relro bindnow
Jul 26 08:37:43 p2 corosync[1015]:   [TOTEM ] Initializing transport (Kronosnet).
Jul 26 08:37:44 p2 corosync[1015]:   [TOTEM ] kronosnet crypto initialized: aes256/sha256
Jul 26 08:37:44 p2 corosync[1015]:   [TOTEM ] totemknet initialized
Jul 26 08:37:44 p2 corosync[1015]:   [KNET  ] common: crypto_nss.so has been loaded from /usr/lib/x86_64-linux-gnu/kronosnet/crypto_nss.so
Jul 26 08:37:44 p2 corosync[1015]:   [SERV  ] Service engine loaded: corosync configuration map access [0]
Jul 26 08:37:44 p2 corosync[1015]:   [QB    ] server name: cmap
Jul 26 08:37:44 p2 corosync[1015]:   [SERV  ] Service engine loaded: corosync configuration service [1]
Jul 26 08:37:44 p2 corosync[1015]:   [QB    ] server name: cfg
Jul 26 08:37:44 p2 corosync[1015]:   [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Jul 26 08:37:44 p2 corosync[1015]:   [QB    ] server name: cpg
Jul 26 08:37:44 p2 corosync[1015]:   [SERV  ] Service engine loaded: corosync profile loading service [4]
Jul 26 08:37:44 p2 corosync[1015]:   [SERV  ] Service engine loaded: corosync resource monitoring service [6]
Jul 26 08:37:44 p2 corosync[1015]:   [WD    ] Watchdog not enabled by configuration
Jul 26 08:37:44 p2 corosync[1015]:   [WD    ] resource load_15min missing a recovery key.
Jul 26 08:37:44 p2 corosync[1015]:   [WD    ] resource memory_used missing a recovery key.
Jul 26 08:37:44 p2 corosync[1015]:   [WD    ] no resources configured.
Jul 26 08:37:44 p2 corosync[1015]:   [SERV  ] Service engine loaded: corosync watchdog service [7]
Jul 26 08:37:44 p2 corosync[1015]:   [QUORUM] Using quorum provider corosync_votequorum
Jul 26 08:37:44 p2 corosync[1015]:   [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Jul 26 08:37:44 p2 corosync[1015]:   [QB    ] server name: votequorum
Jul 26 08:37:44 p2 corosync[1015]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Jul 26 08:37:44 p2 corosync[1015]:   [QB    ] server name: quorum
Jul 26 08:37:44 p2 corosync[1015]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Jul 26 08:37:44 p2 corosync[1015]:   [KNET  ] host: host: 1 has no active links
Jul 26 08:37:44 p2 corosync[1015]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Jul 26 08:37:44 p2 corosync[1015]:   [KNET  ] host: host: 1 has no active links
Jul 26 08:37:44 p2 corosync[1015]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Jul 26 08:37:44 p2 corosync[1015]:   [KNET  ] host: host: 1 has no active links
Jul 26 08:37:44 p2 corosync[1015]:   [TOTEM ] A new membership (2:5856) was formed. Members joined: 2
Jul 26 08:37:44 p2 corosync[1015]:   [CPG   ] downlist left_list: 0 received
Jul 26 08:37:44 p2 corosync[1015]:   [QUORUM] Members[1]: 2
Jul 26 08:37:44 p2 corosync[1015]:   [MAIN  ] Completed service synchronization, ready to provide service.
Jul 26 08:37:45 p2 corosync[1015]:   [KNET  ] rx: host: 1 link: 0 is up
Jul 26 08:37:45 p2 corosync[1015]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Jul 26 08:37:53 p2 corosync[1015]:   [KNET  ] pmtud: PMTUD link change for host: 1 link: 0 from 470 to 1366
Jul 26 08:37:53 p2 corosync[1015]:   [KNET  ] pmtud: Global data MTU changed to: 1366
Jul 26 08:40:18 p2 corosync[1015]:   [KNET  ] link: host: 1 link: 0 is down
Jul 26 08:40:19 p2 corosync[1015]:   [KNET  ] rx: host: 1 link: 0 is up
Jul 26 08:40:19 p2 corosync[1015]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Jul 26 08:42:12 p2 corosync[1015]:   [KNET  ] link: host: 1 link: 0 is down
Jul 26 08:42:12 p2 corosync[1015]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Jul 26 08:42:12 p2 corosync[1015]:   [KNET  ] host: host: 1 has no active links
Jul 26 08:42:13 p2 corosync[1015]:   [KNET  ] rx: host: 1 link: 0 is up
Jul 26 08:42:13 p2 corosync[1015]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Jul 26 08:44:11 p2 corosync[1015]:   [KNET  ] link: host: 1 link: 0 is down
Jul 26 08:44:11 p2 corosync[1015]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)
Jul 26 08:44:11 p2 corosync[1015]:   [KNET  ] host: host: 1 has no active links
Jul 26 08:44:13 p2 corosync[1015]:   [KNET  ] rx: host: 1 link: 0 is up
Jul 26 08:44:13 p2 corosync[1015]:   [KNET  ] host: host: 1 (passive) best link: 0 (pri: 1)

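Those knet pmtud warnings ("kernel is reporting MTU: 1500 ... but the other node is not acknowledging packets of this size") point to a link that silently drops full-size frames. One way to check this directly, independent of corosync, is a don't-fragment ping. The arithmetic below assumes plain IPv4, and the address in the comment is a placeholder for the peer's corosync link address:

```shell
# ICMP payload for a full 1500-byte frame: subtract 20 bytes of IPv4 header
# and 8 bytes of ICMP header.
PAYLOAD=$((1500 - 20 - 8))
echo "full-frame payload: ${PAYLOAD} bytes"

# On a node, test the path with the don't-fragment bit set
# (replace the address with the other node's corosync link address):
#   ping -M do -c 3 -s ${PAYLOAD} 192.168.50.221
# If that fails while smaller payloads succeed (e.g. -s 1322 for the
# 1350-byte frames knet fell back to), something on the path is dropping
# full-size packets.
```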
The service works without problems at first, but after a while it loses the connection. Seems like I'm not the only one.
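The syslog above shows the knet link flapping roughly every two minutes (down at 08:40, 08:42, 08:44). A quick way to quantify this over a longer period is to count the down events per hour. A few sample lines are inlined here so the snippet is self-contained; on a node you would grep /var/log/syslog instead:

```shell
# Count knet "link ... is down" events per hour.
# Sample lines inlined for illustration; on a node use:
#   grep 'KNET.*link: 0 is down' /var/log/syslog | awk ...
cat <<'EOF' > /tmp/knet-sample.log
Jul 26 08:40:18 p2 corosync[1015]:   [KNET  ] link: host: 1 link: 0 is down
Jul 26 08:42:12 p2 corosync[1015]:   [KNET  ] link: host: 1 link: 0 is down
Jul 26 08:44:11 p2 corosync[1015]:   [KNET  ] link: host: 1 link: 0 is down
EOF
awk '/link: 0 is down/ { split($3, t, ":"); hour[$1 " " $2 " " t[1] ":00"]++ }
     END { for (h in hour) print h, "-", hour[h], "down event(s)" }' /tmp/knet-sample.log
```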
 
Code:
corosync-cfgtool -s
Printing link status.
Local node ID 2
LINK ID 0
        addr    = 94.XXX.16.XXX
        status:
                nodeid  1:      link enabled:1  link connected:1
                nodeid  2:      link enabled:1  link connected:1


Code:
root@p2:/var/log# pvecm status
Quorum information
------------------
Date:             Fri Jul 26 08:56:56 2019
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000002
Ring ID:          2/5856
Quorate:          No

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           2 Activity blocked
Flags:

Membership information
----------------------
    Nodeid      Votes Name
0x00000002          1 94.XXX.16.XXX (local)
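With two nodes and one of them unreachable, this output is expected: expected votes 2, total votes 1, activity blocked. If you need write access to /etc/pve while the peer is down, you can temporarily lower the expected vote count with pvecm. Use with care and revert once the second node is back; a guarded sketch:

```shell
# Emergency only, on the surviving node of a 2-node cluster:
# lower expected votes so this node becomes quorate again.
# Guarded so the snippet is a no-op outside a Proxmox VE node.
if command -v pvecm >/dev/null 2>&1; then
    pvecm expected 1
    pvecm status      # Quorate should now read "Yes"
else
    echo "pvecm not available - run this on the Proxmox node"
fi
```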
 
My temporary switch is the suspect; I replaced it today.
There was a massive load of 450 because of ZFS replication.
pve2 was at 40 for no apparent reason.
After I reset them, the (old, but almost new) machines came back and the load returned to normal.
 
