[SOLVED] PMX cluster not OK in the WebUI, but working normally

Ingo S

Hello experts,

This afternoon we suddenly ran into a strange phenomenon, with no identifiable trigger. The web UI only shows the host through which the UI was opened as online; all other hosts are marked with a question mark.

At first I suspected a problem with Corosync, or possibly with the Corosync network or with multicast.
So I first pinged all other hosts from every host over the Corosync network. That worked.
Then I pinged all hosts simultaneously via multicast with omping. That was also completely unproblematic: 10,000 packets sent, 9,992 OK, which is well within the expected range.
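For reference, the multicast test was essentially the invocation below, run on all nodes in parallel (a sketch using our node addresses; the flag combination follows the usual omping test example, adjust count/interval as needed):
Code:
# run simultaneously on every cluster node; short burst of small multicast packets
omping -c 10000 -i 0.001 -F -q 192.168.16.1 192.168.16.2 192.168.16.3 192.168.16.4 192.168.16.5 192.168.16.6 192.168.16.8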

I thought, okay, maybe something is wrong with the Corosync service, so on the host that only runs unimportant guests I stopped and restarted Corosync. No change.
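(For the record, that restart was nothing more than the following, plus a glance at the logs, in case anyone wants to reproduce it:)
Code:
systemctl restart corosync
systemctl status corosync
journalctl -u corosync -n 50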

What really puzzles me is the output of pvecm status:
root@vm-2:~# pvecm status
Quorum information
------------------
Date: Thu Mar 7 17:07:03 2019
Quorum provider: corosync_votequorum
Nodes: 7
Node ID: 0x00000002
Ring ID: 1/475104
Quorate: Yes

Votequorum information
----------------------
Expected votes: 8
Highest expected: 8
Total votes: 7
Quorum: 5
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 192.168.16.1
0x00000002 1 192.168.16.2 (local)
0x00000003 1 192.168.16.3
0x00000004 1 192.168.16.4
0x00000005 1 192.168.16.5
0x00000006 1 192.168.16.6
0x00000008 1 192.168.16.8
One host is down, but it has a hardware fault and will be repaired soon. Other than that the cluster looks completely OK. The web UI also reports under Datacenter -> Summary that everything is OK. But I cannot migrate guests, the web UI of a given host gives me no information about guests running on other hosts, and all hosts except the one I opened the web UI on are marked with a ?. VM names are not shown either.
root@vm-2:~# cat /etc/pve/corosync.conf
logging {
debug: off
to_syslog: yes
}

nodelist {
node {
name: vm-1
nodeid: 1
quorum_votes: 1
ring0_addr: 192.168.16.1
}
node {
name: vm-2
nodeid: 2
quorum_votes: 1
ring0_addr: 192.168.16.2
}
node {
name: vm-3
nodeid: 3
quorum_votes: 1
ring0_addr: 192.168.16.3
}
node {
name: vm-4
nodeid: 4
quorum_votes: 1
ring0_addr: 192.168.16.4
}
node {
name: vm-5
nodeid: 5
quorum_votes: 1
ring0_addr: 192.168.16.5
}
node {
name: vm-6
nodeid: 6
quorum_votes: 1
ring0_addr: 192.168.16.6
}
node {
name: vm-7
nodeid: 7
quorum_votes: 1
ring0_addr: 192.168.16.7
}
node {
name: vm-8
nodeid: 8
quorum_votes: 1
ring0_addr: 192.168.16.8
}
}

quorum {
provider: corosync_votequorum
}

totem {
cluster_name: Langeoog
config_version: 11
interface {
bindnetaddr: 192.168.16.1
ringnumber: 0
}
ip_version: ipv4
secauth: on
version: 2
}
root@vm-2:~# systemctl status pve-cluster.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-01-03 09:39:18 CET; 2 months 2 days ago
Main PID: 2545 (pmxcfs)
Tasks: 13 (limit: 7372)
Memory: 116.0M
CPU: 2h 51min 8.710s
CGroup: /system.slice/pve-cluster.service
└─2545 /usr/bin/pmxcfs

Mar 07 16:28:44 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2274/00000004)
Mar 07 16:28:44 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2274/00000005)
Mar 07 16:28:44 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2274/00000006)
Mar 07 16:28:44 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2274/00000007)
Mar 07 16:28:44 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2274/00000002)
Mar 07 16:28:44 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2274/00000003)
Mar 07 16:28:44 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2274/00000004)
Mar 07 16:28:44 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2274/00000005)
Mar 07 16:28:44 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2274/00000006)
Mar 07 16:28:44 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2274/00000007)
root@vm-2:~# systemctl status pvestatd.service
● pvestatd.service - PVE Status Daemon
Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-01-03 09:39:19 CET; 2 months 2 days ago
Main PID: 3353 (pvestatd)
Tasks: 1 (limit: 7372)
Memory: 82.9M
CPU: 1d 4h 8min 453ms
CGroup: /system.slice/pvestatd.service
└─3353 pvestatd

Mar 07 15:09:48 vm-2 pvestatd[3353]: got timeout
Mar 07 15:09:48 vm-2 pvestatd[3353]: status update time (5.179 seconds)
Mar 07 15:34:28 vm-2 pvestatd[3353]: got timeout
Mar 07 15:34:28 vm-2 pvestatd[3353]: status update time (5.168 seconds)
Mar 07 15:34:38 vm-2 pvestatd[3353]: got timeout
Mar 07 15:34:38 vm-2 pvestatd[3353]: status update time (5.188 seconds)
Mar 07 15:34:48 vm-2 pvestatd[3353]: got timeout
Mar 07 15:34:48 vm-2 pvestatd[3353]: status update time (5.182 seconds)
Mar 07 15:34:58 vm-2 pvestatd[3353]: got timeout
Mar 07 15:34:58 vm-2 pvestatd[3353]: status update time (5.178 seconds)

pvestatd looks suspicious. But I don't dare restart it right now. VMs we need for operations are running on this host, and at the moment I can't migrate the VMs anyway.

At the time the error appeared, no changes whatsoever had been made to the cluster. All VMs are reachable as usual and Ceph is running without problems, so for now I am simply leaving the cluster as it is. I hope this can be solved without downtime...

Does anyone have an idea? What other information do you need for troubleshooting?
I still have the syslog, but there was a log rotation this afternoon, so I would first have to figure out where the interesting bits are and how to get at them. I'm not exactly a journalctl expert.
 
Are all storages online? Something like this can happen when an NFS storage hangs. Simply check with

# pvesm status
 
I checked the storages right away. Everything is fine there.
root@vm-2:~# pvesm status
Name Type Status Total Used Available %
Backup-Daily nfs active 38888193024 22044027904 16844148736 56.69%
Backup-Weekly nfs active 38888193024 22044027904 16844148736 56.69%
CD-Images nfs active 38888193024 22044027904 16844148736 56.69%
HDD_Storage-VM rbd active 38443833435 16934592603 21509240832 44.05%
Test nfs active 38888193024 22044027904 16844148736 56.69%
local dir active 15158232 5781436 8587088 38.14%
local-lvm lvmthin active 31326208 0 31326208 0.00%

Out of curiosity I ran pvestatd status. The command just hangs on the console; even after several minutes there is no response.
The kernel shows the following messages:
[7868784.777877] INFO: task pvesr:2069249 blocked for more than 120 seconds.
[7868784.777892] Tainted: P O 4.15.18-9-pve #1
[7868784.777904] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[7868784.777935] pvesr D 0 2069249 1 0x00000000
[7868784.777936] Call Trace:
[7868784.777938] __schedule+0x3e0/0x870
[7868784.777939] ? path_parentat+0x3e/0x80
[7868784.777940] schedule+0x36/0x80
[7868784.777942] rwsem_down_write_failed+0x208/0x390
[7868784.777943] call_rwsem_down_write_failed+0x17/0x30
[7868784.777944] ? call_rwsem_down_write_failed+0x17/0x30
[7868784.777946] down_write+0x2d/0x40
[7868784.777947] filename_create+0x7e/0x160
[7868784.777948] SyS_mkdir+0x51/0x100
[7868784.777949] do_syscall_64+0x73/0x130
[7868784.777951] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[7868784.777952] RIP: 0033:0x7fcc6f1c1447
[7868784.777952] RSP: 002b:00007ffda502bd68 EFLAGS: 00000246 ORIG_RAX: 0000000000000053
[7868784.777953] RAX: ffffffffffffffda RBX: 0000562881da0010 RCX: 00007fcc6f1c1447
[7868784.777954] RDX: 00005628817b2e84 RSI: 00000000000001ff RDI: 0000562885a0be60
[7868784.777954] RBP: 0000000000000000 R08: 0000000000000200 R09: 0000000000000030
[7868784.777955] R10: 0000000000000000 R11: 0000000000000246 R12: 00005628833b6c48
[7868784.777956] R13: 00005628854fc9b0 R14: 0000562885a0be60 R15: 00000000000001ff
[7868905.612983] INFO: task pveproxy worker:1928595 blocked for more than 120 seconds.
[7868905.613005] Tainted: P O 4.15.18-9-pve #1
[7868905.613017] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[7868905.613034] pveproxy worker D 0 1928595 1979 0x00000000
[7868905.613035] Call Trace:
[7868905.613039] __schedule+0x3e0/0x870
[7868905.613041] ? get_acl+0x7c/0x100
[7868905.613042] schedule+0x36/0x80
[7868905.613043] rwsem_down_read_failed+0x10a/0x170
[7868905.613044] ? _cond_resched+0x1a/0x50
[7868905.613046] call_rwsem_down_read_failed+0x18/0x30
[7868905.613047] ? call_rwsem_down_read_failed+0x18/0x30
[7868905.613048] down_read+0x20/0x40
[7868905.613050] path_openat+0x897/0x14a0
[7868905.613051] do_filp_open+0x99/0x110
[7868905.613053] ? __check_object_size+0xb3/0x190
[7868905.613054] ? __alloc_fd+0x46/0x170
[7868905.613056] do_sys_open+0x135/0x280
[7868905.613057] ? do_sys_open+0x135/0x280
[7868905.613058] SyS_open+0x1e/0x20
[7868905.613060] do_syscall_64+0x73/0x130
[7868905.613061] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[7868905.613062] RIP: 0033:0x7f9021de4820
[7868905.613063] RSP: 002b:00007ffdaf540db8 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
[7868905.613064] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f9021de4820
[7868905.613065] RDX: 00000000000001b6 RSI: 0000000000000000 RDI: 000055ea6a63e680
[7868905.613065] RBP: 0000000000000000 R08: 00007ffdaf540fe0 R09: 000055ea6a63e680
[7868905.613066] R10: 000055ea628ae4e0 R11: 0000000000000246 R12: 0000000000000000
[7868905.613067] R13: 000055ea64013010 R14: 00007ffdaf540fe0 R15: 000055ea6402d3f0
[7868905.613069] INFO: task pvesr:2069249 blocked for more than 120 seconds.
[7868905.613083] Tainted: P O 4.15.18-9-pve #1
[7868905.613096] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[7868905.613112] pvesr D 0 2069249 1 0x00000000
[7868905.613113] Call Trace:
[7868905.613114] __schedule+0x3e0/0x870
[7868905.613115] ? path_parentat+0x3e/0x80
[7868905.613116] schedule+0x36/0x80
[7868905.613118] rwsem_down_write_failed+0x208/0x390
[7868905.613119] call_rwsem_down_write_failed+0x17/0x30
[7868905.613120] ? call_rwsem_down_write_failed+0x17/0x30
[7868905.613122] down_write+0x2d/0x40
[7868905.613123] filename_create+0x7e/0x160
[7868905.613124] SyS_mkdir+0x51/0x100
[7868905.613125] do_syscall_64+0x73/0x130
[7868905.613126] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[7868905.613127] RIP: 0033:0x7fcc6f1c1447
[7868905.613128] RSP: 002b:00007ffda502bd68 EFLAGS: 00000246 ORIG_RAX: 0000000000000053
[7868905.613129] RAX: ffffffffffffffda RBX: 0000562881da0010 RCX: 00007fcc6f1c1447
[7868905.613129] RDX: 00005628817b2e84 RSI: 00000000000001ff RDI: 0000562885a0be60
[7868905.613130] RBP: 0000000000000000 R08: 0000000000000200 R09: 0000000000000030
[7868905.613130] R10: 0000000000000000 R11: 0000000000000246 R12: 00005628833b6c48
[7868905.613131] R13: 00005628854fc9b0 R14: 0000562885a0be60 R15: 00000000000001ff
This looks the same on all servers. Even on the one I had rebooted, I see these messages in the kernel log right after boot.
 
How do the filesystem/disks of the host itself look?

Is storage replication in use? Because in dmesg "pvesr" (Proxmox VE Storage Replication) is hanging.
Is ZFS in use?

The kernel errors definitely point to a problem with some storage; which one exactly is a bit hard to say.
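(Whether replication jobs are configured at all can be checked quickly from the CLI; a sketch, assuming the pvesr list/status subcommands of your PVE version:)
Code:
# show configured replication jobs and the state of the last run
pvesr list
pvesr status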
 
A short addendum:
I just tried to start a VM; unfortunately that did not work.
Code:
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/QemuServer.pm line 2026.
TASK ERROR: start failed: command '/usr/bin/kvm -id 1000 -name PC-i.schmidt -chardev 'socket,id=qmp,path=/var/run/qemu-server/1000.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/1000.pid -daemonize -smbios 'type=1,uuid=79cd9f1a-74ff-424a-a63c-90604bfa96a0' -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/1000.vnc,x509,password -no-hpet -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer' -m 4096 -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.3,chassis_nr=3,bus=pci.0,addr=0x5' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -readconfig /usr/share/qemu-server/pve-usb.cfg -chardev 'spicevmc,id=usbredirchardev0,name=usbredir' -device 'usb-redir,chardev=usbredirchardev0,id=usbredirdev0,bus=ehci.0' -chardev 'spicevmc,id=usbredirchardev1,name=usbredir' -device 'usb-redir,chardev=usbredirchardev1,id=usbredirdev1,bus=ehci.0' -device 'qxl-vga,id=vga,bus=pci.0,addr=0x2' -chardev 'socket,path=/var/run/qemu-server/1000.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -spice 'tls-port=61000,addr=127.0.0.1,tls-ciphers=HIGH,seamless-migration=on' -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev 'spicevmc,id=vdagent,name=vdagent' -device 'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:18a69ffb9b90' -drive 'file=/mnt/pve/CD-Images/template/iso/virtio-win-0.1.126.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1' -drive 'file=rbd:HDD_Storage/vm-1000-disk-1:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/HDD_Storage-VM.keyring,if=none,id=drive-scsi0,cache=unsafe,discard=on,format=raw,aio=threads,detect-zeroes=unmap' -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -device 'virtio-scsi-pci,id=virtioscsi1,bus=pci.3,addr=0x2' -drive 'file=rbd:HDD_Storage/vm-1000-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/HDD_Storage-VM.keyring,if=none,id=drive-scsi1,cache=unsafe,discard=on,format=raw,aio=threads,detect-zeroes=unmap' -device 'scsi-hd,bus=virtioscsi1.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1' -netdev 'type=tap,id=net0,ifname=tap1000i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=1E:EA:9E:A1:7A:2F,netdev=net0,bus=pci.0,addr=0x12,id=net0' -netdev 'type=tap,id=net1,ifname=tap1000i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=9E:C5:EF:DA:43:AD,netdev=net1,bus=pci.0,addr=0x13,id=net1' -rtc 'driftfix=slew,base=localtime' -machine 'type=pc' -global 'kvm-pit.lost_tick_policy=discard'' failed: got timeout
 
The local storages also look normal.
root@vm-1:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 24G 0 24G 0% /dev
tmpfs 4.8G 506M 4.3G 11% /run
/dev/mapper/pve-root 7.1G 3.1G 3.7G 46% /
tmpfs 24G 66M 24G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 24G 0 24G 0% /sys/fs/cgroup
/dev/sdi2 511M 304K 511M 1% /boot/efi
/dev/fuse 30M 88K 30M 1% /etc/pve
/dev/sdd1 97M 5.5M 92M 6% /var/lib/ceph/osd/ceph-10
/dev/sdb1 94M 5.5M 89M 6% /var/lib/ceph/osd/ceph-2
/dev/sdh1 97M 5.5M 92M 6% /var/lib/ceph/osd/ceph-31
/dev/sdg1 94M 5.5M 89M 6% /var/lib/ceph/osd/ceph-30
/dev/sdc1 97M 5.5M 92M 6% /var/lib/ceph/osd/ceph-9
/dev/sdf1 94M 5.5M 89M 6% /var/lib/ceph/osd/ceph-29
/dev/sde1 94M 5.5M 89M 6% /var/lib/ceph/osd/ceph-11
/dev/sda1 97M 5.5M 92M 6% /var/lib/ceph/osd/ceph-1
nasserver3:/raid0/data/_NAS_NFS_Exports_/Proxmox-Daily 37T 21T 16T 57% /mnt/pve/Backup-Daily
nasserver3:/raid0/data/_NAS_NFS_Exports_/Proxmox-weekly 37T 21T 16T 57% /mnt/pve/Backup-Weekly
nasserver3:/raid0/data/_NAS_NFS_Exports_/CD-Images 37T 21T 16T 57% /mnt/pve/CD-Images
nasserver3:/raid0/data/_NAS_NFS_Exports_/VM-HDDs 37T 21T 16T 57% /mnt/pve/Test
tmpfs
I also don't see any processes waiting on storage:
Total DISK READ : 0.00 B/s | Total DISK WRITE : 24.36 M/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 22.64 M/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
4081 be/4 ceph 0.00 B/s 561.45 K/s 0.00 % 31.98 % ceph-osd -f --cluster ceph --id 9 --setuser ceph --setgroup ceph [bstore_kv_sync]
3849 be/4 ceph 0.00 B/s 6.11 M/s 0.00 % 25.66 % ceph-osd -f --cluster ceph --id 31 --setuser ceph --setgroup ceph [bstore_kv_sync]
3807 be/4 ceph 0.00 B/s 7.21 M/s 0.00 % 0.00 % ceph-osd -f --cluster ceph --id 30 --setuser ceph --setgroup ceph [bstore_kv_sync]
4440 be/4 ceph 0.00 B/s 5.48 M/s 0.00 % 0.00 % ceph-osd -f --cluster ceph --id 11 --setuser ceph --setgroup ceph [bstore_kv_sync]
4819 be/4 ceph 0.00 B/s 2.27 M/s 0.00 % 0.00 % ceph-osd -f --cluster ceph --id 9 --setuser ceph --setgroup ceph [tp_osd_tp]
4821 be/4 ceph 0.00 B/s 882.27 K/s 0.00 % 0.00 % ceph-osd -f --cluster ceph --id 9 --setuser ceph --setgroup ceph [tp_osd_tp]
4703 be/4 ceph 0.00 B/s 882.27 K/s 0.00 % 0.00 % ceph-osd -f --cluster ceph --id 31 --setuser ceph --setgroup ceph [tp_osd_tp]
4384 be/4 ceph 0.00 B/s 1042.69 K/s 0.00 % 0.00 % ceph-osd -f --cluster ceph --id 29 --setuser ceph --setgroup ceph [bstore_kv_sync]
1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init
2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]
3 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/0:0]
4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/0:0H]
Local storage isn't used for VMs at all; only PVE itself lives on local storage. What is also strange is that the problem appeared on all servers in the cluster at the same time.
 
Hmm, no processes in the "D" state either?
Code:
ps auxf | awk '{state=$8; if (state == "D") print $0;}'

Local storage isn't used for VMs at all; only PVE itself lives on local storage. What is also strange is that the problem appeared on all servers in the cluster at the same time.

Hmm, ok, very strange. Well, local storage is used indirectly by the services, e.g. for locking or indirectly through the cluster configuration filesystem (pmxcfs), but that all nodes would be affected at the same time...

pvestatd looks suspicious. But I don't dare restart it right now. VMs we need for operations are running on this host, and at the moment I can't migrate the VMs anyway.

You can restart pvestatd fairly safely, though I rather doubt it will change anything.
The problem could well originate from pmxcfs; the other daemons that are hanging all access /etc/pve, provided by pmxcfs, sooner or later...
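A minimal sketch of such a restart plus a sanity check of the pmxcfs mount (nothing else is touched; running guests are not affected by restarting the status daemon):
Code:
systemctl restart pvestatd
systemctl status pvestatd
# /etc/pve should show up as a mounted fuse filesystem provided by pmxcfs
findmnt /etc/pve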
 
Okay, so there is exactly one process in state "D":
Code:
1309332 ?        Ds     0:00 /usr/bin/perl -T /usr/bin/pvesr run --mail 1
On vm-2 I tried earlier to stop pvesr with systemctl stop pvesr.service. After about 15 minutes the console responded again, after I had meanwhile tried to kill the corresponding process.
For a short time two hosts then calmed down again, but now everything is hanging again.
I think pvesr is our culprit... The only question left is what is tormenting the service so badly that it gives up.
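(If pvesr really turned out to be the culprit, the replication runner could at least be kept from piling up further; a sketch, assuming the standard pvesr.timer unit, to be re-enabled afterwards:)
Code:
# see when the replication runner would fire next
systemctl list-timers pvesr.timer
# temporarily keep it from starting again; undo later with: systemctl enable --now pvesr.timer
systemctl disable --now pvesr.timer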
 
Another addendum:
I wanted to know whether pmxcfs is working.
root@vm-2:~# systemctl status pve-cluster.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-01-03 09:39:18 CET; 2 months 3 days ago
Main PID: 2545 (pmxcfs)
Tasks: 13 (limit: 7372)
Memory: 586.6M
CPU: 2h 52min 1.037s
CGroup: /system.slice/pve-cluster.service
└─2545 /usr/bin/pmxcfs

Mar 08 08:07:09 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2274/0000000D)
Mar 08 08:07:09 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2274/0000000C)
Mar 08 08:07:09 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2274/0000000E)
Mar 08 08:07:09 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2274/0000000D)
Mar 08 08:07:09 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2274/0000000E)
Mar 08 08:07:13 vm-2 pmxcfs[2545]: [status] notice: members: 1/2274, 2/2545, 4/258432, 5/260810, 6/2033, 8/249898
Mar 08 08:07:13 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2274/0000000F)
Mar 08 08:07:13 vm-2 pmxcfs[2545]: [status] notice: members: 1/2274, 2/2545, 3/270012, 4/258432, 5/260810, 6/2033, 8/249898
Mar 08 08:07:13 vm-2 pmxcfs[2545]: [status] notice: queue not emtpy - resening 24455 messages
Mar 08 08:07:13 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2274/00000010)
This actually looks fine to me. But:
When I try to create a file under /etc/pve it does not work. A touch test.txt simply hangs. Hence my new theory:
The cluster filesystem hangs because pvesr is not synchronizing the filesystem across the hosts.
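(To repeat that write test without losing a shell to it, a small sketch with a timeout; test.txt is just a throwaway name:)
Code:
# exit code 124 means the write into the cluster filesystem blocked and was killed after 5 s
timeout 5 touch /etc/pve/test.txt
echo "exit code: $?"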
 
Ah, on some servers there are a few other processes after all. After vm-1, vm-2 and vm-8 briefly seemed to be okay, they started the backup that should have run last night.

The backup jobs are hanging on all hosts, though.

Edit:
For completeness I have now searched all servers for processes in state D, D+ or Ds:
VM-1:
Code:
root@vm-1:~# ps auxf | awk '{state=$8; if (state == "D" || state == "D+" || state == "Ds") print $0;}'
root     1309332  0.0  0.1 494228 86360 ?        Ds   08:08   0:00 /usr/bin/perl -T /usr/bin/pvesr run --mail 1
VM-2:
Code:
root@vm-2:~# ps auxf | awk '{state=$8; if (state == "D" || state == "D+" || state == "Ds") print $0;}'
root     2019701  0.0  0.0 494272 86148 pts/0    D+   08:09   0:00  |       \_ /usr/bin/perl -T /usr/bin/pvesr
root        3612  0.0  0.0 525764 11820 ?        Ds   Jan03   7:31 pve-ha-crm
root     2019211  0.0  0.0 478248 76152 ?        D    08:07   0:00      \_ /usr/bin/perl /usr/sbin/qm set 101 --lock backup
root     2019310  0.0  0.0 494252 86124 ?        Ds   08:08   0:00 /usr/bin/perl -T /usr/bin/pvesr run --mail 1
VM-3:
Code:
root@vm-3:~# ps auxf | awk '{state=$8; if (state == "D" || state == "D+" || state == "Ds") print $0;}'
root     2965496  0.0  0.0 489784 80308 pts/0    D+   07:23   0:00  |       \_ /usr/bin/perl /usr/bin/pvestatd status
root     2797149  0.0  0.0 494976 86876 ?        D    Mar07   0:00  |       \_ /usr/bin/perl -T /usr/bin/vzdump 206 214 311 312 313 315 810 811 104 101 204 109 103 105 --storage Backup-Daily --quiet 1 --compress lzo --mode snapshot --mailnotification failure --mailto it-support@langeoog.de
root     2864885  0.0  0.0 494976 86844 ?        D    00:00   0:00          \_ /usr/bin/perl -T /usr/bin/vzdump 108 --quiet 1 --compress lzo --storage Backup-Daily --mailnotification failure --mailto it-support@langeoog.de --mode snapshot
root     2745106  0.0  0.0 494248 86616 ?        Ds   Mar07   0:00 /usr/bin/perl -T /usr/bin/pvesr run --mail 1
root     2935389  0.0  0.0 523280 92352 ?        Ds   05:12   0:00 /usr/bin/perl /usr/bin/pveupdate
root     2952220  0.0  0.0 497972 88228 ?        Ds   06:25   0:00 /usr/bin/perl -T /usr/bin/pveproxy restart
VM-4:
Code:
root@vm-4:~# ps auxf | awk '{state=$8; if (state == "D" || state == "D+" || state == "Ds") print $0;}'
root     2109601  0.0  0.2 494932 87492 ?        D    Mar07   0:00  |       \_ /usr/bin/perl -T /usr/bin/vzdump 206 214 311 312 313 315 810 811 104 101 204 109 103 105 --storage Backup-Daily --quiet 1 --compress lzo --mode snapshot --mailnotification failure --mailto it-support@langeoog.de
root     2176603  0.0  0.2 495060 86888 ?        D    00:00   0:00          \_ /usr/bin/perl -T /usr/bin/vzdump 108 --quiet 1 --compress lzo --storage Backup-Daily --mailnotification failure --mailto it-support@langeoog.de --mode snapshot
www-data 1928595  0.0  0.3 570568 123944 ?       D    Mar07   0:01  \_ pveproxy worker
root     2069249  0.0  0.2 494248 86304 ?        Ds   Mar07   0:00 /usr/bin/perl -T /usr/bin/pvesr run --mail 1
root     2256277  0.0  0.2 523260 91944 ?        Ds   05:56   0:00 /usr/bin/perl /usr/bin/pveupdate
root     2263029  0.0  0.2 498092 87852 ?        Ds   06:25   0:00 /usr/bin/perl -T /usr/bin/pveproxy restart
root     2278920  0.0  0.2 494280 86632 ?        Ds   07:34   0:00 /usr/bin/perl -T /usr/bin/pvesr run --mail 1
Well, the remaining servers just repeat the same picture...
 
Hmm, sending private messages somehow doesn't work, so just briefly here:
Who do I need to contact if I want it to be visible here that we have taken out a subscription for our servers?
 
Who do I need to contact if I want it to be visible here that we have taken out a subscription for our servers?

Simply add the subscription key to the corresponding field under "Personal Details".
 
When I try to create a file under /etc/pve it does not work. A touch test.txt simply hangs. Hence my new theory:
The cluster filesystem hangs because pvesr is not synchronizing the filesystem across the hosts.

pvesr is not doing anything wrong here; it is only a victim. The problem is, as I suspected:
The problem could well originate from pmxcfs; the other daemons that are hanging all access /etc/pve, provided by pmxcfs, sooner or later...

Could you go through some of the logs and maybe post them here?:
Code:
journalctl -u corosync -u pve-cluster --since="-1week"
(adjust the --since value if necessary, or just scroll)
 
Simply add the subscription key to the corresponding field under "Personal Details".
Thanks, I apparently overlooked that :rolleyes:

Since I rebooted VM-1, there is only
Code:
root@vm-1:/# journalctl -u corosync -u pve-cluster --since="-1week"
-- No entries --

VM-2 is interesting, but personally it doesn't bring me any closer to a cause:
Mar 07 13:28:29 vm-2 pmxcfs[2545]: [dcdb] notice: data verification successful
Mar 07 13:33:58 vm-2 pmxcfs[2545]: [status] notice: received log
Mar 07 13:48:58 vm-2 pmxcfs[2545]: [status] notice: received log
Mar 07 14:03:58 vm-2 pmxcfs[2545]: [status] notice: received log
Mar 07 14:18:58 vm-2 pmxcfs[2545]: [status] notice: received log
Mar 07 14:28:29 vm-2 pmxcfs[2545]: [dcdb] notice: data verification successful
Mar 07 14:33:59 vm-2 pmxcfs[2545]: [status] notice: received log
Mar 07 14:48:59 vm-2 pmxcfs[2545]: [status] notice: received log
Mar 07 15:04:00 vm-2 pmxcfs[2545]: [status] notice: received log
Mar 07 15:09:19 vm-2 corosync[2794]: notice [TOTEM ] A processor failed, forming new configuration.
Mar 07 15:09:19 vm-2 corosync[2794]: [TOTEM ] A processor failed, forming new configuration.
Mar 07 15:09:25 vm-2 corosync[2794]: notice [TOTEM ] A new membership (192.168.16.2:475060) was formed. Members left: 1 3 4 5 6 8
Mar 07 15:09:25 vm-2 corosync[2794]: notice [TOTEM ] Failed to receive the leave message. failed: 1 3 4 5 6 8
Mar 07 15:09:25 vm-2 corosync[2794]: warning [CPG ] downlist left_list: 6 received
Mar 07 15:09:25 vm-2 corosync[2794]: notice [QUORUM] This node is within the non-primary component and will NOT provide any services.
Mar 07 15:09:25 vm-2 corosync[2794]: notice [QUORUM] Members[1]: 2
Mar 07 15:09:25 vm-2 corosync[2794]: notice [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:09:25 vm-2 corosync[2794]: [TOTEM ] A new membership (192.168.16.2:475060) was formed. Members left: 1 3 4 5 6 8
Mar 07 15:09:25 vm-2 corosync[2794]: [TOTEM ] Failed to receive the leave message. failed: 1 3 4 5 6 8
Mar 07 15:09:25 vm-2 corosync[2794]: [CPG ] downlist left_list: 6 received
Mar 07 15:09:25 vm-2 pmxcfs[2545]: [dcdb] notice: members: 2/2545
Mar 07 15:09:25 vm-2 corosync[2794]: [QUORUM] This node is within the non-primary component and will NOT provide any services.
Mar 07 15:09:25 vm-2 corosync[2794]: [QUORUM] Members[1]: 2
Mar 07 15:09:25 vm-2 pmxcfs[2545]: [status] notice: members: 2/2545
Mar 07 15:09:25 vm-2 corosync[2794]: [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:09:25 vm-2 pmxcfs[2545]: [status] notice: node lost quorum
Mar 07 15:09:25 vm-2 pmxcfs[2545]: [dcdb] crit: received write while not quorate - trigger resync
Mar 07 15:09:25 vm-2 pmxcfs[2545]: [dcdb] crit: leaving CPG group
Mar 07 15:09:25 vm-2 pmxcfs[2545]: [dcdb] notice: start cluster connection
Mar 07 15:09:25 vm-2 pmxcfs[2545]: [dcdb] notice: members: 2/2545
Mar 07 15:09:25 vm-2 pmxcfs[2545]: [dcdb] notice: all data is up to date
Mar 07 15:09:45 vm-2 corosync[2794]: notice [TOTEM ] A new membership (192.168.16.1:475064) was formed. Members joined: 1 3 4 5 6 8
Mar 07 15:09:45 vm-2 corosync[2794]: [TOTEM ] A new membership (192.168.16.1:475064) was formed. Members joined: 1 3 4 5 6 8
Mar 07 15:09:45 vm-2 corosync[2794]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-2 corosync[2794]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-2 corosync[2794]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-2 corosync[2794]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-2 corosync[2794]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-2 corosync[2794]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-2 corosync[2794]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-2 corosync[2794]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-2 corosync[2794]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-2 corosync[2794]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-2 corosync[2794]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: members: 2/2545, 6/2033
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:45 vm-2 corosync[2794]: notice [QUORUM] This node is within the primary component and will provide service.
Mar 07 15:09:45 vm-2 corosync[2794]: notice [QUORUM] Members[7]: 1 2 3 4 5 6 8
Mar 07 15:09:45 vm-2 corosync[2794]: notice [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:09:45 vm-2 corosync[2794]: [QUORUM] This node is within the primary component and will provide service.
Mar 07 15:09:45 vm-2 corosync[2794]: [QUORUM] Members[7]: 1 2 3 4 5 6 8
Mar 07 15:09:45 vm-2 corosync[2794]: [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: cpg_send_message retried 1 times
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: node has quorum
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: members: 2/2545, 3/270012, 6/2033
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: members: 2/2545, 6/2033
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: starting data syncronisation
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 6/2033
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: members: 2/2545, 5/260810, 6/2033
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: members: 2/2545, 4/258432, 5/260810, 6/2033
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: members: 2/2545, 3/270012, 4/258432, 5/260810, 6/2033
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: members: 1/2141, 2/2545, 3/270012, 4/258432, 5/260810, 6/2033
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: members: 1/2141, 2/2545, 3/270012, 4/258432, 5/260810, 6/2033, 8/249898
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 2/2545/00000017)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 2/2545/00000018)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 2/2545/00000015)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/0000000B)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/0000000C)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 2/2545/00000016)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/0000000D)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/0000000E)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 2/2545/00000017)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 2/2545/00000018)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2141/00000009)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2141/0000000A)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2141/0000000B)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2141/0000000C)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2141/0000000D)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: received sync request (epoch 1/2141/0000000E)
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: received all states
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: leader is 1/2141
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: synced members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [dcdb] notice: all data is up to date
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: received all states
Mar 07 15:09:45 vm-2 pmxcfs[2545]: [status] notice: all data is up to date
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 5/260810, 6/2033, 8/249898
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/0000000F)
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/00000010)
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: received all states
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: leader is 1/2141
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: synced members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: all data is up to date
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: dfsm_deliver_queue: queue length 1
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: remove message from non-member 5/260810
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 4/258432, 6/2033, 8/249898
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/00000011)
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/00000012)
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: received all states
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: leader is 1/2141
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: synced members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:50 vm-2 pmxcfs[2545]: [dcdb] notice: all data is up to date
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 5/260810, 6/2033, 8/249898
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/00000013)
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/00000014)
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: received all states
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: leader is 1/2141
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: synced members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: all data is up to date
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: dfsm_deliver_queue: queue length 1
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: remove message from non-member 5/260810
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 4/258432, 6/2033, 8/249898
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/00000015)
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/00000016)
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: received all states
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: leader is 1/2141
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: synced members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:56 vm-2 pmxcfs[2545]: [dcdb] notice: all data is up to date
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 5/260810, 6/2033, 8/249898
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: starting data syncronisation
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/00000017)
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/00000018)
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: received all states
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: leader is 1/2141
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: synced members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: all data is up to date
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 4/258432, 6/2033, 8/249898
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: starting data syncronisation
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/00000019)
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/0000001A)
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: received all states
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: leader is 1/2141
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: synced members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: all data is up to date
Mar 07 15:10:08 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 5/260810, 6/2033, 8/249898
Mar 07 15:10:08 vm-2 pmxcfs[2545]: [dcdb] notice: starting data syncronisation
Mar 07 15:10:08 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/0000001B)
Mar 07 15:10:08 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:10:08 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/0000001C)
Mar 07 15:10:08 vm-2 pmxcfs[2545]: [dcdb] notice: received all states
Mar 07 15:10:08 vm-2 pmxcfs[2545]: [dcdb] notice: leader is 1/2141
Mar 07 15:10:08 vm-2 pmxcfs[2545]: [dcdb] notice: synced members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:10:08 vm-2 pmxcfs[2545]: [dcdb] notice: all data is up to date
Mar 07 15:10:08 vm-2 pmxcfs[2545]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 4/258432, 6/2033, 8/249898
Mar 07 15:10:08 vm-2 pmxcfs[2545]: [dcdb] notice: starting data syncronisation
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: received sync request (epoch 1/2141/00000018)
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: received all states
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: leader is 1/2141
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: synced members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:10:02 vm-2 pmxcfs[2545]: [dcdb] notice: all data is up to date
...
Mar 07 12:50:15 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 13:03:58 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 13:05:15 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 13:18:58 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 13:20:16 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 13:28:29 vm-3 pmxcfs[270012]: [dcdb] notice: data verification successful
Mar 07 13:33:58 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 13:35:17 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 13:48:58 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 13:50:17 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 14:03:58 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 14:05:17 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 14:18:58 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 14:20:19 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 14:28:29 vm-3 pmxcfs[270012]: [dcdb] notice: data verification successful
Mar 07 14:33:59 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 14:35:19 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 14:48:59 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 14:50:19 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 14:54:01 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 15:04:00 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 15:05:20 vm-3 pmxcfs[270012]: [status] notice: received log
Mar 07 15:09:19 vm-3 corosync[272950]: notice [TOTEM ] A processor failed, forming new configuration.
Mar 07 15:09:19 vm-3 corosync[272950]: [TOTEM ] A processor failed, forming new configuration.
Mar 07 15:09:25 vm-3 corosync[272950]: notice [TOTEM ] A new membership (192.168.16.3:475060) was formed. Members left: 1 2 8
Mar 07 15:09:25 vm-3 corosync[272950]: notice [TOTEM ] Failed to receive the leave message. failed: 1 2 8
Mar 07 15:09:25 vm-3 corosync[272950]: [TOTEM ] A new membership (192.168.16.3:475060) was formed. Members left: 1 2 8
Mar 07 15:09:25 vm-3 corosync[272950]: [TOTEM ] Failed to receive the leave message. failed: 1 2 8
Mar 07 15:09:25 vm-3 corosync[272950]: warning [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-3 corosync[272950]: warning [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-3 corosync[272950]: [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-3 corosync[272950]: [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-3 corosync[272950]: warning [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-3 corosync[272950]: [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-3 corosync[272950]: warning [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-3 corosync[272950]: [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 4/258432, 5/260810, 6/2033
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:25 vm-3 corosync[272950]: notice [QUORUM] This node is within the non-primary component and will NOT provide any services.
Mar 07 15:09:25 vm-3 corosync[272950]: notice [QUORUM] Members[4]: 3 4 5 6
Mar 07 15:09:25 vm-3 corosync[272950]: notice [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:09:25 vm-3 corosync[272950]: [QUORUM] This node is within the non-primary component and will NOT provide any services.
Mar 07 15:09:25 vm-3 corosync[272950]: [QUORUM] Members[4]: 3 4 5 6
Mar 07 15:09:25 vm-3 corosync[272950]: [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] notice: cpg_send_message retried 1 times
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [status] notice: node lost quorum
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [status] notice: members: 3/270012, 4/258432, 5/260810, 6/2033
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [status] notice: starting data syncronisation
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/0000002E)
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [status] notice: received sync request (epoch 3/270012/0000002E)
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] notice: received all states
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] notice: leader is 3/270012
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] notice: synced members: 3/270012, 4/258432, 5/260810, 6/2033
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] notice: start sending inode updates
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] notice: sent all (0) updates
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] notice: all data is up to date
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] notice: dfsm_deliver_queue: queue length 4
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] crit: received write while not quorate - trigger resync
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] crit: leaving CPG group
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [status] notice: received all states
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [status] notice: all data is up to date
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [status] notice: dfsm_deliver_queue: queue length 35
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] notice: start cluster connection
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012
Mar 07 15:09:25 vm-3 pmxcfs[270012]: [dcdb] notice: all data is up to date
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 5/260810
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/00000032)
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: all data is up to date
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 4/258432
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/00000034)
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] crit: ignore sync request from wrong member 4/258432
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 4/258432/0000002E)
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: all data is up to date
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 6/2033
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/00000036)
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: received all states
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: leader is 3/270012
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: synced members: 3/270012, 6/2033
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: start sending inode updates
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: sent all (0) updates
Mar 07 15:09:26 vm-3 pmxcfs[270012]: [dcdb] notice: all data is up to date
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 5/260810, 6/2033
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/00000037)
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 6/2033
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/00000038)
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: received all states
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: leader is 3/270012
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: synced members: 3/270012, 6/2033
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: start sending inode updates
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: sent all (0) updates
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: all data is up to date
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 4/258432, 6/2033
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/00000039)
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 6/2033
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/0000003A)
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: received all states
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: leader is 3/270012
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: synced members: 3/270012, 6/2033
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: start sending inode updates
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: sent all (0) updates
Mar 07 15:09:32 vm-3 pmxcfs[270012]: [dcdb] notice: all data is up to date
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 5/260810, 6/2033
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/0000003B)
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 6/2033
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/0000003C)
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: received all states
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: leader is 3/270012
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: synced members: 3/270012, 6/2033
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: start sending inode updates
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: sent all (0) updates
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: all data is up to date
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 4/258432, 6/2033
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/0000003D)
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 6/2033
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/0000003E)
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: received all states
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: leader is 3/270012
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: synced members: 3/270012, 6/2033
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: start sending inode updates
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: sent all (0) updates
Mar 07 15:09:38 vm-3 pmxcfs[270012]: [dcdb] notice: all data is up to date
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 5/260810, 6/2033
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 6/2033
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/0000003F)
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/00000040)
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: received all states
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: leader is 3/270012
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: synced members: 3/270012, 6/2033
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: start sending inode updates
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: sent all (0) updates
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: all data is up to date
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 4/258432, 6/2033
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/00000041)
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: members: 3/270012, 6/2033
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 3/270012/00000042)
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: received all states
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: leader is 3/270012
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: synced members: 3/270012, 6/2033
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: start sending inode updates
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: sent all (0) updates
Mar 07 15:09:44 vm-3 pmxcfs[270012]: [dcdb] notice: all data is up to date
Mar 07 15:09:45 vm-3 corosync[272950]: notice [TOTEM ] A new membership (192.168.16.1:475064) was formed. Members joined: 1 2 8
Mar 07 15:09:45 vm-3 corosync[272950]: [TOTEM ] A new membership (192.168.16.1:475064) was formed. Members joined: 1 2 8
Mar 07 15:09:45 vm-3 corosync[272950]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-3 corosync[272950]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-3 corosync[272950]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-3 corosync[272950]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-3 corosync[272950]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-3 corosync[272950]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-3 corosync[272950]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-3 corosync[272950]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-3 corosync[272950]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-3 corosync[272950]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-3 corosync[272950]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] notice: members: 2/2545, 3/270012, 6/2033
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 6/2033
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: members: 2/2545, 3/270012, 4/258432, 5/260810, 6/2033
Mar 07 15:09:45 vm-3 corosync[272950]: notice [QUORUM] This node is within the primary component and will provide service.
Mar 07 15:09:45 vm-3 corosync[272950]: notice [QUORUM] Members[7]: 1 2 3 4 5 6 8
Mar 07 15:09:45 vm-3 corosync[272950]: notice [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: starting data syncronisation
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: members: 1/2141, 2/2545, 3/270012, 4/258432, 5/260810, 6/2033
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: members: 1/2141, 2/2545, 3/270012, 4/258432, 5/260810, 6/2033, 8/249898
Mar 07 15:09:45 vm-3 corosync[272950]: [QUORUM] This node is within the primary component and will provide service.
Mar 07 15:09:45 vm-3 corosync[272950]: [QUORUM] Members[7]: 1 2 3 4 5 6 8
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: node has quorum
Mar 07 15:09:45 vm-3 corosync[272950]: [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 2/2545/00000017)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 2/2545/00000018)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: received sync request (epoch 2/2545/00000015)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 1/2141/0000000B)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 1/2141/0000000C)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: received sync request (epoch 2/2545/00000016)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 1/2141/0000000D)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 1/2141/0000000E)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: received sync request (epoch 2/2545/00000017)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: received sync request (epoch 2/2545/00000018)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: received sync request (epoch 1/2141/00000009)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: received sync request (epoch 1/2141/0000000A)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: received sync request (epoch 1/2141/0000000B)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: received sync request (epoch 1/2141/0000000C)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: received sync request (epoch 1/2141/0000000D)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: received sync request (epoch 1/2141/0000000E)
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] notice: received all states
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] notice: leader is 1/2141
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] notice: synced members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [dcdb] notice: all data is up to date
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: received all states
Mar 07 15:09:45 vm-3 pmxcfs[270012]: [status] notice: all data is up to date
Mar 07 15:09:50 vm-3 pmxcfs[270012]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 5/260810, 6/2033, 8/249898
Mar 07 15:09:50 vm-3 pmxcfs[270012]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:50 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 1/2141/0000000F)
Mar 07 15:09:50 vm-3 pmxcfs[270012]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:50 vm-3 pmxcfs[270012]: [dcdb] notice: received sync request (epoch 1/2141/00000010)
Mar 07 15:09:50 vm-3 pmxcfs[270012]: [dcdb] notice: received all states
Mar 07 15:09:50 vm-3 pmxcfs[270012]: [dcdb] notice: leader is 1/2141
Mar 07 15:09:50 vm-3 pmxcfs[270012]: [dcdb] notice: synced members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:50 vm-3 pmxcfs[270012]: [dcdb] notice: all data is up to date
 
VM-4 is very interesting; unfortunately the previous post was too long, so here is a new one.
What stands out is this message: "can't mcast to group"

Mar 07 15:05:20 vm-4 pmxcfs[258432]: [status] notice: received log
Mar 07 15:09:19 vm-4 corosync[261273]: notice [TOTEM ] A processor failed, forming new configuration.
Mar 07 15:09:19 vm-4 corosync[261273]: [TOTEM ] A processor failed, forming new configuration.
Mar 07 15:09:25 vm-4 corosync[261273]: notice [TOTEM ] A new membership (192.168.16.3:475060) was formed. Members left: 1 2 8
Mar 07 15:09:25 vm-4 corosync[261273]: notice [TOTEM ] Failed to receive the leave message. failed: 1 2 8
Mar 07 15:09:25 vm-4 corosync[261273]: [TOTEM ] A new membership (192.168.16.3:475060) was formed. Members left: 1 2 8
Mar 07 15:09:25 vm-4 corosync[261273]: [TOTEM ] Failed to receive the leave message. failed: 1 2 8
Mar 07 15:09:25 vm-4 corosync[261273]: warning [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-4 corosync[261273]: [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-4 corosync[261273]: [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-4 corosync[261273]: warning [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-4 corosync[261273]: [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-4 corosync[261273]: [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-4 corosync[261273]: warning [CPG ] downlist left_list: 3 received
Mar 07 15:09:25 vm-4 pmxcfs[258432]: [dcdb] notice: members: 3/270012, 4/258432, 5/260810, 6/2033
Mar 07 15:09:25 vm-4 pmxcfs[258432]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:25 vm-4 pmxcfs[258432]: [status] notice: members: 3/270012, 4/258432, 5/260810, 6/2033
Mar 07 15:09:25 vm-4 pmxcfs[258432]: [status] notice: starting data syncronisation
Mar 07 15:09:25 vm-4 corosync[261273]: [QUORUM] This node is within the non-primary component and will NOT provide any services.
Mar 07 15:09:25 vm-4 corosync[261273]: notice [QUORUM] This node is within the non-primary component and will NOT provide any services.
Mar 07 15:09:25 vm-4 corosync[261273]: notice [QUORUM] Members[4]: 3 4 5 6
Mar 07 15:09:25 vm-4 corosync[261273]: notice [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:09:25 vm-4 corosync[261273]: [QUORUM] Members[4]: 3 4 5 6
Mar 07 15:09:25 vm-4 corosync[261273]: [MAIN ] Completed service synchronization, ready to provide service.
[...]

[...]
Mar 07 15:09:44 vm-4 pmxcfs[258432]: [dcdb] notice: start cluster connection
Mar 07 15:09:44 vm-4 pmxcfs[258432]: [dcdb] crit: internal error - unknown mode 0
Mar 07 15:09:44 vm-4 pmxcfs[258432]: [dcdb] crit: leaving CPG group
Mar 07 15:09:45 vm-4 corosync[261273]: notice [TOTEM ] A new membership (192.168.16.1:475064) was formed. Members joined: 1 2 8
Mar 07 15:09:45 vm-4 corosync[261273]: [TOTEM ] A new membership (192.168.16.1:475064) was formed. Members joined: 1 2 8
Mar 07 15:09:45 vm-4 corosync[261273]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-4 corosync[261273]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-4 corosync[261273]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-4 corosync[261273]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-4 corosync[261273]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-4 corosync[261273]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-4 corosync[261273]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-4 corosync[261273]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-4 corosync[261273]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-4 corosync[261273]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-4 corosync[261273]: [CPG ] downlist left_list: 0 received
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: members: 2/2545, 3/270012, 4/258432, 5/260810, 6/2033
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: starting data syncronisation
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: members: 1/2141, 2/2545, 3/270012, 4/258432, 5/260810, 6/2033
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: members: 1/2141, 2/2545, 3/270012, 4/258432, 5/260810, 6/2033, 8/249898
Mar 07 15:09:45 vm-4 corosync[261273]: notice [QUORUM] This node is within the primary component and will provide service.
Mar 07 15:09:45 vm-4 corosync[261273]: notice [QUORUM] Members[7]: 1 2 3 4 5 6 8
Mar 07 15:09:45 vm-4 corosync[261273]: notice [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:09:45 vm-4 corosync[261273]: [QUORUM] This node is within the primary component and will provide service.
Mar 07 15:09:45 vm-4 corosync[261273]: [QUORUM] Members[7]: 1 2 3 4 5 6 8
Mar 07 15:09:45 vm-4 corosync[261273]: [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: node has quorum
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: received sync request (epoch 2/2545/00000015)
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: received sync request (epoch 2/2545/00000016)
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: received sync request (epoch 2/2545/00000017)
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] crit: ignore sync request from wrong member 2/2545
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: received sync request (epoch 2/2545/00000018)
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: received sync request (epoch 1/2141/00000009)
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: received sync request (epoch 1/2141/0000000A)
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: received sync request (epoch 1/2141/0000000B)
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: received sync request (epoch 1/2141/0000000C)
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: received sync request (epoch 1/2141/0000000D)
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: received sync request (epoch 1/2141/0000000E)
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: received all states
Mar 07 15:09:45 vm-4 pmxcfs[258432]: [status] notice: all data is up to date
Mar 07 15:09:46 vm-4 corosync[261273]: error [CPG ] *** 0x564ede778430 can't mcast to group state:0, error:12
Mar 07 15:09:46 vm-4 corosync[261273]: [CPG ] *** 0x564ede778430 can't mcast to group state:0, error:12
Mar 07 15:09:50 vm-4 pmxcfs[258432]: [dcdb] notice: start cluster connection
Mar 07 15:09:50 vm-4 pmxcfs[258432]: [dcdb] notice: members: 3/270012, 4/258432, 6/2033
Mar 07 15:09:50 vm-4 pmxcfs[258432]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:50 vm-4 pmxcfs[258432]: [dcdb] notice: received sync request (epoch 3/270012/00000041)
Mar 07 15:09:50 vm-4 pmxcfs[258432]: [dcdb] notice: members: 3/270012, 6/2033
Mar 07 15:09:50 vm-4 pmxcfs[258432]: [dcdb] notice: we (4/258432) left the process group
Mar 07 15:09:50 vm-4 corosync[261273]: error [CPG ] *** 0x564ede778430 can't mcast to group pve_dcdb_v1 state:1, error:12
Mar 07 15:09:50 vm-4 pmxcfs[258432]: [dcdb] crit: leaving CPG group
Mar 07 15:09:50 vm-4 corosync[261273]: [CPG ] *** 0x564ede778430 can't mcast to group pve_dcdb_v1 state:1, error:12
Mar 07 15:09:56 vm-4 pmxcfs[258432]: [dcdb] notice: start cluster connection
Mar 07 15:09:56 vm-4 pmxcfs[258432]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 4/258432, 6/2033, 8/249898
Mar 07 15:09:56 vm-4 pmxcfs[258432]: [dcdb] notice: starting data syncronisation
Mar 07 15:09:56 vm-4 pmxcfs[258432]: [dcdb] notice: received sync request (epoch 1/2141/00000011)
Mar 07 15:09:56 vm-4 pmxcfs[258432]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:09:56 vm-4 corosync[261273]: error [CPG ] *** 0x564ede778430 can't mcast to group pve_dcdb_v1 state:1, error:12
Mar 07 15:09:56 vm-4 pmxcfs[258432]: [dcdb] notice: we (4/258432) left the process group
Mar 07 15:09:56 vm-4 pmxcfs[258432]: [dcdb] crit: leaving CPG group
Mar 07 15:09:56 vm-4 corosync[261273]: [CPG ] *** 0x564ede778430 can't mcast to group pve_dcdb_v1 state:1, error:12
Mar 07 15:10:02 vm-4 pmxcfs[258432]: [dcdb] notice: start cluster connection
Mar 07 15:10:02 vm-4 pmxcfs[258432]: [dcdb] crit: internal error - unknown mode 0
Mar 07 15:10:02 vm-4 pmxcfs[258432]: [dcdb] crit: leaving CPG group
Mar 07 15:10:02 vm-4 corosync[261273]: error [CPG ] *** 0x564ede778430 can't mcast to group state:0, error:12
Mar 07 15:10:02 vm-4 corosync[261273]: [CPG ] *** 0x564ede778430 can't mcast to group state:0, error:12
Mar 07 15:10:07 vm-4 corosync[261273]: error [CPG ] *** 0x564ede778430 can't mcast to group state:0, error:12
Mar 07 15:10:07 vm-4 corosync[261273]: [CPG ] *** 0x564ede778430 can't mcast to group state:0, error:12
Mar 07 15:10:08 vm-4 pmxcfs[258432]: [dcdb] notice: start cluster connection
Mar 07 15:10:08 vm-4 pmxcfs[258432]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 4/258432, 6/2033, 8/249898
Mar 07 15:10:08 vm-4 pmxcfs[258432]: [dcdb] notice: starting data syncronisation
Mar 07 15:10:08 vm-4 pmxcfs[258432]: [dcdb] notice: received sync request (epoch 1/2141/00000019)
Mar 07 15:10:08 vm-4 pmxcfs[258432]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 6/2033, 8/249898
Mar 07 15:10:08 vm-4 pmxcfs[258432]: [dcdb] notice: we (4/258432) left the process group
Mar 07 15:10:08 vm-4 pmxcfs[258432]: [dcdb] crit: leaving CPG group

[...]

[...]
Mar 07 15:15:50 vm-4 pmxcfs[258432]: [dcdb] notice: we (4/258432) left the process group
Mar 07 15:15:50 vm-4 corosync[261273]: error [CPG ] *** 0x564ede778430 can't mcast to group pve_dcdb_v1 state:1, error:12
Mar 07 15:15:50 vm-4 pmxcfs[258432]: [dcdb] crit: leaving CPG group
Mar 07 15:15:50 vm-4 corosync[261273]: [CPG ] *** 0x564ede778430 can't mcast to group pve_dcdb_v1 state:1, error:12
Mar 07 15:15:56 vm-4 pmxcfs[258432]: [dcdb] notice: start cluster connection
Mar 07 15:15:56 vm-4 pmxcfs[258432]: [dcdb] crit: dfsm_node_info_lookup failed
Mar 07 15:15:56 vm-4 pmxcfs[258432]: [dcdb] crit: leaving CPG group
Mar 07 15:15:56 vm-4 systemd[1]: corosync.service: Main process exited, code=killed, status=11/SEGV
Mar 07 15:15:56 vm-4 systemd[1]: corosync.service: Unit entered failed state.
Mar 07 15:15:56 vm-4 systemd[1]: corosync.service: Failed with result 'signal'.
Mar 07 15:15:58 vm-4 pmxcfs[258432]: [dcdb] crit: cpg_leave failed: 2
Mar 07 15:15:58 vm-4 pmxcfs[258432]: [dcdb] crit: cpg_leave failed: 2
Mar 07 15:15:58 vm-4 pmxcfs[258432]: [quorum] crit: quorum_dispatch failed: 2
Mar 07 15:15:58 vm-4 pmxcfs[258432]: [status] notice: node lost quorum
Mar 07 15:15:58 vm-4 pmxcfs[258432]: [confdb] crit: cmap_dispatch failed: 2
Mar 07 15:15:58 vm-4 pmxcfs[258432]: [quorum] crit: quorum_initialize failed: 2
Mar 07 15:15:58 vm-4 pmxcfs[258432]: [quorum] crit: can't initialize service
Mar 07 15:15:58 vm-4 pmxcfs[258432]: [confdb] crit: cmap_initialize failed: 2
Mar 07 15:15:58 vm-4 pmxcfs[258432]: [confdb] crit: can't initialize service
Mar 07 15:15:58 vm-4 pmxcfs[258432]: [status] crit: cpg_dispatch failed: 2
Mar 07 15:15:58 vm-4 pmxcfs[258432]: [status] crit: cpg_leave failed: 2
Mar 07 15:15:59 vm-4 pmxcfs[258432]: [status] notice: start cluster connection
Mar 07 15:15:59 vm-4 pmxcfs[258432]: [status] crit: cpg_initialize failed: 2
Mar 07 15:15:59 vm-4 pmxcfs[258432]: [status] crit: can't initialize service
Mar 07 15:16:02 vm-4 pmxcfs[258432]: [dcdb] notice: start cluster connection
Mar 07 15:16:02 vm-4 pmxcfs[258432]: [dcdb] crit: cpg_initialize failed: 2
[...]

[...]
Mar 07 15:59:05 vm-4 pmxcfs[258432]: [status] crit: cpg_initialize failed: 2
Mar 07 15:59:06 vm-4 systemd[1]: Starting Corosync Cluster Engine...
Mar 07 15:59:06 vm-4 corosync[2068758]: [MAIN ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Mar 07 15:59:06 vm-4 corosync[2068758]: [MAIN ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie relro bindnow
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [MAIN ] Corosync Cluster Engine ('2.4.4-dirty'): started and ready to provide service.
Mar 07 15:59:06 vm-4 corosync[2068758]: info [MAIN ] Corosync built-in features: dbus rdma monitoring watchdog systemd xmlconf qdevices qnetd snmp pie relro bindnow
Mar 07 15:59:06 vm-4 corosync[2068758]: [MAIN ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Mar 07 15:59:06 vm-4 corosync[2068758]: warning [MAIN ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
Mar 07 15:59:06 vm-4 corosync[2068758]: warning [MAIN ] Please migrate config file to nodelist.
Mar 07 15:59:06 vm-4 corosync[2068758]: [MAIN ] Please migrate config file to nodelist.
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [TOTEM ] Initializing transport (UDP/IP Multicast).
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Mar 07 15:59:06 vm-4 corosync[2068758]: [TOTEM ] Initializing transport (UDP/IP Multicast).
Mar 07 15:59:06 vm-4 corosync[2068758]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [TOTEM ] The network interface [192.168.16.4] is now up.
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [SERV ] Service engine loaded: corosync configuration map access [0]
Mar 07 15:59:06 vm-4 corosync[2068758]: info [QB ] server name: cmap
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [SERV ] Service engine loaded: corosync configuration service [1]
Mar 07 15:59:06 vm-4 corosync[2068758]: info [QB ] server name: cfg
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Mar 07 15:59:06 vm-4 corosync[2068758]: info [QB ] server name: cpg
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [SERV ] Service engine loaded: corosync profile loading service [4]
Mar 07 15:59:06 vm-4 corosync[2068758]: [TOTEM ] The network interface [192.168.16.4] is now up.
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [SERV ] Service engine loaded: corosync resource monitoring service [6]
Mar 07 15:59:06 vm-4 corosync[2068758]: warning [WD ] Watchdog not enabled by configuration
Mar 07 15:59:06 vm-4 corosync[2068758]: warning [WD ] resource load_15min missing a recovery key.
Mar 07 15:59:06 vm-4 corosync[2068758]: warning [WD ] resource memory_used missing a recovery key.
Mar 07 15:59:06 vm-4 corosync[2068758]: info [WD ] no resources configured.
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [SERV ] Service engine loaded: corosync watchdog service [7]
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [QUORUM] Using quorum provider corosync_votequorum
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
Mar 07 15:59:06 vm-4 corosync[2068758]: info [QB ] server name: votequorum
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Mar 07 15:59:06 vm-4 corosync[2068758]: info [QB ] server name: quorum
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [TOTEM ] A new membership (192.168.16.4:475068) was formed. Members joined: 4
Mar 07 15:59:06 vm-4 corosync[2068758]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:59:06 vm-4 corosync[2068758]: [SERV ] Service engine loaded: corosync configuration map access [0]
Mar 07 15:59:06 vm-4 systemd[1]: Started Corosync Cluster Engine.
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [QUORUM] Members[1]: 4
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:59:06 vm-4 corosync[2068758]: [QB ] server name: cmap
Mar 07 15:59:06 vm-4 corosync[2068758]: [SERV ] Service engine loaded: corosync configuration service [1]
Mar 07 15:59:06 vm-4 corosync[2068758]: [QB ] server name: cfg
Mar 07 15:59:06 vm-4 corosync[2068758]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Mar 07 15:59:06 vm-4 corosync[2068758]: [QB ] server name: cpg
Mar 07 15:59:06 vm-4 corosync[2068758]: [SERV ] Service engine loaded: corosync profile loading service [4]
Mar 07 15:59:06 vm-4 corosync[2068758]: [SERV ] Service engine loaded: corosync resource monitoring service [6]
Mar 07 15:59:06 vm-4 corosync[2068758]: [WD ] Watchdog not enabled by configuration
Mar 07 15:59:06 vm-4 corosync[2068758]: [WD ] resource load_15min missing a recovery key.
Mar 07 15:59:06 vm-4 corosync[2068758]: [WD ] resource memory_used missing a recovery key.
Mar 07 15:59:06 vm-4 corosync[2068758]: [WD ] no resources configured.
Mar 07 15:59:06 vm-4 corosync[2068758]: [SERV ] Service engine loaded: corosync watchdog service [7]
Mar 07 15:59:06 vm-4 corosync[2068758]: [QUORUM] Using quorum provider corosync_votequorum
Mar 07 15:59:06 vm-4 corosync[2068758]: [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
Mar 07 15:59:06 vm-4 corosync[2068758]: [QB ] server name: votequorum
Mar 07 15:59:06 vm-4 corosync[2068758]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Mar 07 15:59:06 vm-4 corosync[2068758]: [QB ] server name: quorum
Mar 07 15:59:06 vm-4 corosync[2068758]: [TOTEM ] A new membership (192.168.16.4:475068) was formed. Members joined: 4
Mar 07 15:59:06 vm-4 corosync[2068758]: [CPG ] downlist left_list: 0 received
Mar 07 15:59:06 vm-4 corosync[2068758]: [QUORUM] Members[1]: 4
Mar 07 15:59:06 vm-4 corosync[2068758]: [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [TOTEM ] A new membership (192.168.16.1:475080) was formed. Members joined: 1 2 3 5 6 8
Mar 07 15:59:06 vm-4 corosync[2068758]: [TOTEM ] A new membership (192.168.16.1:475080) was formed. Members joined: 1 2 3 5 6 8
Mar 07 15:59:06 vm-4 corosync[2068758]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:59:06 vm-4 corosync[2068758]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:59:06 vm-4 corosync[2068758]: [CPG ] downlist left_list: 0 received
Mar 07 15:59:06 vm-4 corosync[2068758]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:59:06 vm-4 corosync[2068758]: [CPG ] downlist left_list: 0 received
Mar 07 15:59:06 vm-4 corosync[2068758]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:59:06 vm-4 corosync[2068758]: [CPG ] downlist left_list: 0 received
Mar 07 15:59:06 vm-4 corosync[2068758]: [CPG ] downlist left_list: 0 received
Mar 07 15:59:06 vm-4 corosync[2068758]: warning [CPG ] downlist left_list: 0 received
Mar 07 15:59:06 vm-4 corosync[2068758]: [CPG ] downlist left_list: 0 received
Mar 07 15:59:06 vm-4 corosync[2068758]: [CPG ] downlist left_list: 0 received
Mar 07 15:59:06 vm-4 corosync[2068758]: [CPG ] downlist left_list: 0 received
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [QUORUM] This node is within the primary component and will provide service.
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [QUORUM] Members[7]: 1 2 3 4 5 6 8
Mar 07 15:59:06 vm-4 corosync[2068758]: notice [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:59:06 vm-4 corosync[2068758]: [QUORUM] This node is within the primary component and will provide service.
Mar 07 15:59:06 vm-4 corosync[2068758]: [QUORUM] Members[7]: 1 2 3 4 5 6 8
Mar 07 15:59:06 vm-4 corosync[2068758]: [MAIN ] Completed service synchronization, ready to provide service.
Mar 07 15:59:08 vm-4 pmxcfs[258432]: [dcdb] notice: members: 1/2141, 2/2545, 3/270012, 4/258432, 5/260810, 6/2033, 8/249898
Mar 07 15:59:08 vm-4 pmxcfs[258432]: [dcdb] notice: starting data syncronisation
Mar 07 15:59:08 vm-4 pmxcfs[258432]: [dcdb] notice: received sync request (epoch 1/2141/0000009C)
Mar 07 15:59:10 vm-4 pmxcfs[258432]: [status] notice: update cluster info (cluster name Langeoog, version = 11)
Mar 07 15:59:10 vm-4 pmxcfs[258432]: [status] notice: node has quorum
Mar 07 15:59:11 vm-4 pmxcfs[258432]: [status] notice: members: 1/2141, 2/2545, 3/270012, 4/258432, 5/260810, 6/2033, 8/249898
Mar 07 15:59:11 vm-4 pmxcfs[258432]: [status] notice: starting data syncronisation
Mar 07 15:59:12 vm-4 pmxcfs[258432]: [status] notice: received sync request (epoch 1/2141/00000016)
Mar 07 16:07:50 vm-4 pmxcfs[258432]: [dcdb] notice: members: 1/2141, 3/270012, 4/258432, 5/260810, 6/2033, 8/249898
Mar 07 16:07:50 vm-4 pmxcfs[258432]: [status] notice: members: 1/2141, 3/270012, 4/258432, 5/260810, 6/2033, 8/249898
Mar 07 16:07:50 vm-4 pmxcfs[258432]: [dcdb] notice: received sync request (epoch 1/2141/0000009D)
Mar 07 16:07:50 vm-4 pmxcfs[258432]: [status] notice: received sync request (epoch 1/2141/00000017)
Mar 07 16:07:50 vm-4 corosync[2068758]: notice [TOTEM ] A new membership (192.168.16.1:475084) was formed. Members left: 2
Mar 07 16:07:50 vm-4 corosync[2068758]: [TOTEM ] A new membership (192.168.16.1:475084) was formed. Members left: 2
 
I don't want to push, but does anyone have an idea how we can get this problem back under control?
The cluster really needs to get back into a usable state soon.

At the moment neither backups work nor can we migrate or start machines. Machines that have been shut down cannot be started again, and we don't dare to reboot any machines either.
In the long run this is a really difficult situation.

Apparently pmxcfs is no longer healthy. However, I'm afraid that if I restart the pve-cluster service on a host, it won't come back up, just as happened on host VM-1. Only after a reboot did the service run again there. Since we cannot migrate the VMs, rebooting a host is not really an option, because that means downtime we currently cannot accept.

Any ideas?
 
Hmm, was anything changed on the network (e.g. a switch replacement or a firmware upgrade)?
A bit hard to say this quickly; we would almost have to get onto the nodes ourselves...

Something seems to have disturbed the state of corosync/pmxcfs, something that is not directly visible in the log; the "can't mcast" error is certainly suspicious...

Are you using HA? If not, you could try to reset the cluster stack state by running the following on all nodes:
Code:
systemctl restart corosync pve-cluster
or at least on the least important node to start with.

If no HA has been configured since the last node reboot, no watchdog is running and this can be done without risk.
Normally nothing happens to the VMs and CTs as a result of the command above, i.e. they keep running. I say "normally" because we haven't fully identified the culprit yet, so potentially anything is possible, though it is very unlikely.
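
A quick way to confirm that HA is really unused before the restart is to look at the HA manager on one of the nodes (just a minimal check; the resource list should be empty if HA was never configured):
Code:
# list configured HA resources (empty output means HA is not in use)
ha-manager config

# overall HA/CRM status of the cluster
ha-manager status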
 
If it helps, I would also be happy to let one of you onto the nodes. I would send you SSL-VPN access for that; we could sort out the details via PM.

Regarding the network: our network is fairly complex. There are a lot of VLANs and a number of buildings connected via fibre. In one of those buildings I replaced a switch at exactly the time in question. The corosync VLAN is carried on the VLAN trunk ports in that network segment, but it is not enabled on any port of the switch in question. I consider it fairly unlikely that this has anything to do with it. BUT:
There was one odd phenomenon. I temporarily disabled the uplink to that building as a test, because a colleague thought the problem might be caused by it. Nothing changed on the cluster. The moment I re-enabled the uplink, however, the cluster recovered for a few seconds and a couple of nodes were shown as OK. This behaviour could not be reproduced later, so I would also classify it as coincidence.
The switches the cluster is connected to were neither reconfigured nor updated in any way at that time.

We decided against HA precisely because of situations like this :D
I will try the reset on VM-1 again; because of the reboot no VM is running there at the moment anyway, and I will report back here right away.
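
For the attempt on VM-1 I would restart the stack there as suggested above and follow the logs in a second shell, so it is immediately visible whether pve-cluster comes back up (a minimal sketch):
Code:
# on VM-1 only: restart the cluster stack
systemctl restart corosync pve-cluster

# in a second shell: follow both services while they come back up
journalctl -f -u corosync -u pve-cluster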

As an emergency solution I have made a plan:
We have a single standalone Proxmox server as an absolute last-resort disaster anchor. In an after-hours operation I would move all essential machines such as the SQL servers, domain controllers, DHCP/RADIUS and application servers etc. onto it from a backup, so that operations are guaranteed for the next day. That would buy some time to get the cluster back in order before the less important machines are needed again.
For that, I would only have to "manipulate" pmxcfs in such a way that I regain write access and can take a backup. Or I would have to get the virtual disks out of Ceph somehow in order to move the machines to the emergency node manually.
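
In case it comes to that, pulling a disk out of Ceph with the standard rbd tooling would roughly look like this (only a sketch: the pool name "rbd", the VM ID 101 and the target paths are assumptions and have to be adapted; Proxmox disks are usually named vm-<vmid>-disk-<n>):
Code:
# list the RBD images of one VM (pool name "rbd" is an assumption)
rbd -p rbd ls | grep vm-101

# export a disk to a raw file on some reachable storage
rbd export rbd/vm-101-disk-0 /mnt/backup/vm-101-disk-0.raw

# on the emergency node, attach the raw file to a freshly created VM
# (hypothetical VM ID and storage name)
qm importdisk 101 /mnt/backup/vm-101-disk-0.raw local-lvm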
 
