[SOLVED] Cluster outage - VMs unreachable

intecsoft

Member
Mar 9, 2021
Hi everyone,

over the weekend our cluster was down for a little more than 5 hours.
The VMs themselves were not frozen (atop kept recording inside every VM, and so did syslog), but they had a massively elevated load
and were also unreachable on the network. Even the DNS servers could not be reached, which then led to further problems.

The first anomalies:

Code:
Mar  6 23:06:48 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/127: -1
Mar  6 23:06:48 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/149: -1
Mar  6 23:06:48 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/189: -1
Mar  6 23:06:48 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/151: -1
...
Mar  6 23:06:48 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/is-master-19/SSD: -1
Mar  6 23:06:48 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/is-master-19/local: -1
...
Mar  6 23:10:42 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/is-master-19/local: -1
Mar  6 23:10:42 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/is-master-19/SSD: -1
Mar  6 23:10:43 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/is-master-16/local: -1
Mar  6 23:10:43 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/is-master-16/SSD: -1
...

Mar  6 23:29:34 is-master-16 pvestatd[1329]: status update time (12.412 seconds)
Mar  6 23:29:34 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/is-master-19/local: -1
Mar  6 23:29:34 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/is-master-19/SSD: -1
Mar  6 23:29:34 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/is-master-23/SSD: -1
Mar  6 23:29:34 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/is-master-23/local: -1
Mar  6 23:29:34 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/is-master-16/local: -1
Mar  6 23:29:34 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/is-master-16/SSD: -1
Mar  6 23:29:54 is-master-16 ceph-osd[1224]: 2021-03-06 23:29:54.169 7f2390276700 -1 osd.6 8952 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.54028820.0:39896176 4.66 4:660059ab:::rbd_data.3851df7d467eb9.0000000000001b64:head [write 1798144~4096] snapc 0=[] ondisk+write+known_if_redirected e8952)
Mar  6 23:29:54 is-master-16 ceph-osd[1223]: 2021-03-06 23:29:54.637 7f01bdb67700 -1 osd.7 8952 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.54464350.0:94161981 4.24 4:2478474f:::rbd_data.2d24364056236a.0000000000000c01:head [write 3567616~4096] snapc 0=[] ondisk+write+known_if_redirected e8952)
Mar  6 23:29:55 is-master-16 pvestatd[1329]: status update time (10.280 seconds)
Mar  6 23:29:55 is-master-16 ceph-osd[1224]: 2021-03-06 23:29:55.217 7f2390276700 -1 osd.6 8952 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.54028820.0:39896176 4.66 4:660059ab:::rbd_data.3851df7d467eb9.0000000000001b64:head [write 1798144~4096] snapc 0=[] ondisk+write+known_if_redirected e8952)
...
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/133: /var/lib/rrdcached/db/pve2-vm/133: illegal attempt to update using time 1615070000 when last update time is 1615070000 (minimum one second step)

Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/119: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/127: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/183: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/143: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/122: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/123: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/120: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/121: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/158: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/187: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/126: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/149: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/189: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/189: /var/lib/rrdcached/db/pve2-vm/189: illegal attempt to update using time 1615070000 when last update time is 1615070000 (minimum one second step)
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/151: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/151: /var/lib/rrdcached/db/pve2-vm/151: illegal attempt to update using time 1615070000 when last update time is 1615070000 (minimum one second step)
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/128: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/128: /var/lib/rrdcached/db/pve2-vm/128: illegal attempt to update using time 1615070000 when last update time is 1615070000 (minimum one second step)
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/184: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/184: /var/lib/rrdcached/db/pve2-vm/184: illegal attempt to update using time 1615070000 when last update time is 1615070000 (minimum one second step)
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/125: -1
Mar  6 23:33:20 is-master-16 pmxcfs[1195]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/125: /var/lib/rrdcached/db/pve2-vm/125: illegal attempt to update using time 1615070000 when last update time is 1615070000 (minimum one second step)

A few words about the cluster:
The cluster consists of 4 servers with 2 OSDs each for Ceph.
The networks (frontend and backend) are 10G networks; the corresponding backup networks (active-backup) are 1G.
According to the transfer counters (ifconfig), however, the latter were not in use.

Scrubbing runs between 22:00 and 06:00.
Judging by the stats, though, I see no correlation here:

Code:
2021-03-06 23:42:08.739129 mgr.is-master-23 (mgr.54458217) 598298 : cluster [DBG] pgmap v540549: 256 pgs: 1 active+clean+scrubbing+deep, 255 active+clean; 4.4 TiB data, 13 TiB used, 16 TiB / 29 TiB avail; 2.2 MiB/s rd, 6.1 MiB/s wr, 1.30k op/s
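For reference, in Nautilus the scrub window can be pinned to off-peak hours through the config database; a sketch matching the 22:00-06:00 window mentioned above (standard Ceph CLI, apply at your own discretion):

```shell
# Restrict scrubs to the 22:00-06:00 window (cluster-wide OSD setting)
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6

# Verify the effective values
ceph config get osd osd_scrub_begin_hour
ceph config get osd osd_scrub_end_hour
```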

The RRDC errors disappeared when I restarted a Ceph manager on 7 March, because one "slow op" had remained stuck.
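On a Proxmox node such a manager restart goes through systemd; a sketch, assuming the affected mgr runs on is-master-23 as in the pgmap log above:

```shell
# Restart the Ceph manager daemon on this node
systemctl restart ceph-mgr@is-master-23.service

# Check that the stuck slow-op warning has cleared
ceph -s
```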

Software in use:
Code:
{
    "mon": {
        "ceph version 14.2.16 (5d5ae817209e503a412040d46b3374855b7efe04) nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.16 (5d5ae817209e503a412040d46b3374855b7efe04) nautilus (stable)": 3
    },
    "osd": {
        "ceph version 14.2.16 (5d5ae817209e503a412040d46b3374855b7efe04) nautilus (stable)": 8
    },
    "mds": {},
    "overall": {
        "ceph version 14.2.16 (5d5ae817209e503a412040d46b3374855b7efe04) nautilus (stable)": 14
    }
}
Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
ceph: 14.2.16-pve1
ceph-fuse: 14.2.16-pve1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-3
libpve-guest-common-perl: 3.1-4
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-5
pve-cluster: 6.2-1
pve-container: 3.3-3
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

Around 04:45 the problem resolved itself at the cluster level (apart from 1 "slow ops" hanging around).
After that there were only problems with services inside the VMs.

Does anyone have an idea?

Other than that, I also found pvestatd timeouts, which keep showing up sporadically anyway:
Code:
Mar  2 14:42:50 is-master-16 pvestatd[1329]: got timeout
Mar  2 17:15:29 is-master-16 pvestatd[1329]: got timeout
Mar  3 02:46:19 is-master-16 pvestatd[1329]: got timeout
Mar  3 03:24:49 is-master-16 pvestatd[1329]: got timeout
Mar  3 10:11:59 is-master-16 pvestatd[1329]: got timeout
Mar  3 20:38:09 is-master-16 pvestatd[1329]: got timeout
Mar  4 02:34:00 is-master-16 pvestatd[1329]: got timeout
Mar  4 04:56:29 is-master-16 pvestatd[1329]: got timeout
Mar  4 10:26:50 is-master-16 pvestatd[1329]: got timeout
Mar  4 10:52:31 is-master-16 pvestatd[1329]: VM 163 qmp command failed - VM 163 qmp command 'query-proxmox-support' failed - unable to connect to VM 163 qmp socket - timeout after 31 retries
Mar  4 14:02:51 is-master-16 pvestatd[1329]: VM 163 qmp command failed - VM 163 qmp command 'query-proxmox-support' failed - unable to connect to VM 163 qmp socket - timeout after 31 retries
Mar  4 14:03:01 is-master-16 pvestatd[1329]: VM 163 qmp command failed - VM 163 qmp command 'query-proxmox-support' failed - unable to connect to VM 163 qmp socket - timeout after 31 retries
Mar  4 14:23:10 is-master-16 pvestatd[1329]: got timeout
Mar  4 14:23:40 is-master-16 pvestatd[1329]: got timeout
Mar  5 04:40:32 is-master-16 pvestatd[1329]: got timeout
Mar  5 07:00:42 is-master-16 pvestatd[1329]: got timeout
Mar  5 07:08:02 is-master-16 pvestatd[1329]: got timeout
Mar  5 22:00:23 is-master-16 pvestatd[1329]: got timeout
Mar  6 02:50:23 is-master-16 pvestatd[1329]: got timeout
Mar  6 05:47:23 is-master-16 pvestatd[1329]: got timeout
Mar  6 07:00:23 is-master-16 pvestatd[1329]: got timeout
Mar  6 09:45:23 is-master-16 pvestatd[1329]: got timeout
Mar  6 10:54:13 is-master-16 pvestatd[1329]: got timeout
Mar  6 11:14:23 is-master-16 pvestatd[1329]: got timeout
Mar  6 14:26:23 is-master-16 pvestatd[1329]: got timeout
Mar  6 15:38:04 is-master-16 pvestatd[1329]: got timeout
Mar  6 22:31:03 is-master-16 pvestatd[1329]: got timeout
Mar  6 23:04:25 is-master-16 pvestatd[1329]: got timeout
Mar  7 01:02:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 02:09:55 is-master-16 pvestatd[1329]: got timeout
Mar  7 02:15:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 02:23:27 is-master-16 pvestatd[1329]: got timeout
Mar  7 02:34:15 is-master-16 pvestatd[1329]: got timeout
Mar  7 02:39:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 03:14:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 03:17:25 is-master-16 pvestatd[1329]: got timeout
Mar  7 03:28:17 is-master-16 pvestatd[1329]: got timeout
Mar  7 03:31:02 is-master-16 pvestatd[1329]: got timeout
Mar  7 03:41:49 is-master-16 pvestatd[1329]: got timeout
Mar  7 03:44:34 is-master-16 pvestatd[1329]: got timeout
Mar  7 03:50:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 03:55:27 is-master-16 pvestatd[1329]: got timeout
Mar  7 03:58:12 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:03:38 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:06:23 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:14:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:17:15 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:25:21 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:33:32 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:36:18 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:44:25 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:44:56 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:47:27 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:47:47 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:47:56 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:49:21 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:51:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:51:21 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:51:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:51:51 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:52:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:53:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:54:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:55:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:55:21 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:55:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:55:41 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:55:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:56:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:57:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:57:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:58:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 04:59:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:00:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:00:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:00:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:00:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:02:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:02:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:03:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:03:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:03:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:04:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:04:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:04:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:05:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:05:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:05:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:06:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:06:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:06:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:07:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:07:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:07:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:07:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:08:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:08:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:08:41 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:08:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:09:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:09:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:09:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:09:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:09:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:09:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:10:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:10:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:10:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:10:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:10:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:10:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:12:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:12:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:13:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:13:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:13:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:13:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:14:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:14:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:14:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:14:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:15:21 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:15:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:15:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:16:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:16:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:20:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:20:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:20:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:21:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:21:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:21:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:21:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:21:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:21:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:22:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:22:11 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:22:21 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:22:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:23:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:23:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:23:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:26:31 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:26:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:26:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:27:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:27:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:27:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:28:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:28:21 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:28:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:28:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:30:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:30:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:30:21 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:30:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:30:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:31:01 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:31:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:32:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:33:01 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:33:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:33:20 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:33:31 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:33:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:33:50 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:34:01 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:36:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:37:01 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:37:10 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:37:40 is-master-16 pvestatd[1329]: got timeout
Mar  7 05:42:00 is-master-16 pvestatd[1329]: got timeout
Mar  7 06:32:30 is-master-16 pvestatd[1329]: got timeout
Mar  7 14:15:13 is-master-16 pvestatd[1329]: got timeout
Mar  9 00:10:04 is-master-16 pvestatd[1329]: got timeout
Mar  9 02:35:34 is-master-16 pvestatd[1329]: got timeout
Mar  9 04:13:04 is-master-16 pvestatd[1329]: got timeout
Mar  9 06:29:34 is-master-16 pvestatd[1329]: got timeout
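To spot where the sporadic timeouts turn into a flood, the log above can be bucketed per hour; a minimal sketch, assuming the messages sit in /var/log/syslog:

```shell
# Count pvestatd "got timeout" messages per hour to expose the outage window
awk '/pvestatd.*got timeout/ {print $1, $2, substr($3,1,2)":00"}' /var/log/syslog \
    | sort | uniq -c
```

In the incident above this jumps from a handful per day to several per minute around 05:00 on Mar 7.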
 
After pulling all the logs together and analyzing them around the start of the incident, I also found the following:
Code:
2021-03-06 23:04:30.595 7f9333625700  1 mon.is-master-19@0(electing) e5 collect_metadata :  no unique device id for : fallback method has no model nor serial'
2021-03-06 23:04:30.599 7f9332623700  1 mon.is-master-19@0(electing) e5 handle_auth_request failed to assign global_id
2021-03-06 23:04:30.599 7f9337e2e700  1 mon.is-master-19@0(electing) e5 handle_auth_request failed to assign global_id
But even on this problem alone, searching online does not get you very far.
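The handle_auth_request messages above appear while the monitors are in an election; the quorum state during such a phase can be inspected with the standard Ceph CLI, as a sketch:

```shell
# Show which monitors are in quorum and which are out
ceph quorum_status --format json-pretty

# Quick one-line view of monitor status
ceph mon stat
```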

EDIT:
this error had already occurred before:
Code:
2021-03-06 07:00:27.331 7f9337e2e700  1 mon.is-master-19@0(electing) e5 handle_auth_request failed to assign global_id
 
In the meantime I was also able to reconstruct the timeline of the Ceph errors:
1. several slow ops for 30 minutes (from 23:29)
2. "Long heartbeat ping times" on the back and front interface for 53 minutes
3. monitor 1 down, then monitor 3 down, monitor 1 down, then monitor 3 down (01:06 - 01:23)
4. 1 OSD down; objects degraded (01:35)
5. from then on, every now and then 1 monitor down (a different one each time); quorum held by the two remaining ones
6. from here on, all of the errors mixed together until the end

Unfortunately I cannot post the logs because of the character limit.
 
I believe I have found the cause.
At least I was able to reproduce the log entries, and also an occasional watchdog-triggered reboot.
The likely culprit is Nagios, which issues its checks via SSH. So that the checks would still work cleanly under load
and not immediately run into a timeout, I had given the local user's processes a nice value of -5.
If a server then comes under load, which with Ceph happens quickly due to I/O, and Nagios comes along
wanting all the status values (including Ceph's), those queries are of course scheduled with higher priority, and the cluster
services are left behind.
Since Nagios runs with nice 0 again and the watchdog with -15, the problem can no longer be reproduced, the log entries above no longer appear, and no server has rebooted watchdog-triggered while an HA VM was running on it.

So it was a home-made problem that nobody here could have found.
But maybe it will help someone else some day.
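The priority inversion described above is easy to demonstrate with plain nice values; a sketch (the renice line assumes the checks run as a local user named nagios, adjust to your setup):

```shell
# A process started with a positive nice value is deprioritized under load;
# one with a negative value (like the old -5 Nagios setting) wins the CPU.
nice -n 10 nice        # prints 10: the child inherits the nice value

# Hypothetical cleanup: put all processes of the check user back to default 0
# renice -n 0 -u nagios
```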
 
