HEALTH_WARN - daemons have recently crashed

Hi everyone,

for a few days now I have been getting the following error message about once or twice a day:

HEALTH_WARN - daemons have recently crashed

The MONs, MGRs, and OSDs are all online, and as far as I can tell everything is running fine.

With

# ceph crash archive-all

I silence the warning. However, I am curious what is actually causing the crashes; they occur on different nodes.
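
For context, the sequence I go through looks roughly like this (`<ID>` stands for one of the crash IDs listed by ceph crash ls):

Code:
ceph health detail      # shows the RECENT_CRASH warning and how many crashes it covers
ceph crash ls           # list all stored crash reports with their IDs
ceph crash info <ID>    # full JSON report for a single crash
ceph crash archive-all  # acknowledge all crashes and clear the warning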

# ceph crash info <ID>

prints the following output:

Node4
{
"os_version_id": "10",
"utsname_machine": "x86_64",
"entity_name": "mon.promo4",
"backtrace": [
"(()+0x12730) [0x7f30ca142730]",
"(gsignal()+0x10b) [0x7f30c9c257bb]",
"(abort()+0x121) [0x7f30c9c10535]",
"(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a3) [0x7f30cb27be79]",
"(()+0x282000) [0x7f30cb27c000]",
"(Paxos::store_state(MMonPaxos*)+0xaa8) [0x5602540626f8]",
"(Paxos::handle_commit(boost::intrusive_ptr<MonOpRequest>)+0x2ea) [0x560254062a5a]",
"(Paxos::dispatch(boost::intrusive_ptr<MonOpRequest>)+0x223) [0x560254068213]",
"(Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0x131c) [0x560253f9db1c]",
"(Monitor::_ms_dispatch(Message*)+0x4aa) [0x560253f9e10a]",
"(Monitor::ms_dispatch(Message*)+0x26) [0x560253fcda36]",
"(Dispatcher::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0x26) [0x560253fc9f66]",
"(DispatchQueue::entry()+0x1a49) [0x7f30cb4b4e69]",
"(DispatchQueue::DispatchThread::entry()+0xd) [0x7f30cb5629ed]",
"(()+0x7fa3) [0x7f30ca137fa3]",
"(clone()+0x3f) [0x7f30c9ce74cf]"
],
"process_name": "ceph-mon",
"assert_line": 485,
"archived": "2020-01-21 07:02:49.036123",
"assert_file": "/mnt/npool/tlamprecht/pve-ceph/ceph-14.2.6/src/common/ceph_time.h",
"utsname_sysname": "Linux",
"os_version": "10 (buster)",
"os_id": "10",
"assert_msg": "/mnt/npool/tlamprecht/pve-ceph/ceph-14.2.6/src/common/ceph_time.h: In function 'ceph::time_detail::timespan ceph::to_timespan(ceph::time_detail::signedspan)' thread 7f30c11fe700 time 2020-01-21 03:43:48.848411\n/mnt/npool/tlamprecht/pve-ceph/ceph-14.2.6/src/common/ceph_time.h: 485: FAILED ceph_assert(z >= signedspan::zero())\n",
"assert_func": "ceph::time_detail::timespan ceph::to_timespan(ceph::time_detail::signedspan)",
"ceph_version": "14.2.6",
"os_name": "Debian GNU/Linux 10 (buster)",
"timestamp": "2020-01-21 02:43:48.891122Z",
"assert_thread_name": "ms_dispatch",
"utsname_release": "5.3.13-1-pve",
"utsname_hostname": "promo4",
"crash_id": "2020-01-21_02:43:48.891122Z_0aade13c-463f-43fe-9b05-76ca71f6bc1b",
"assert_condition": "z >= signedspan::zero()",
"utsname_version": "#1 SMP PVE 5.3.13-1 (Thu, 05 Dec 2019 07:18:14 +0100)"
}

Node2
{
"os_version_id": "10",
"utsname_machine": "x86_64",
"entity_name": "mon.promo2",
"backtrace": [
"(()+0x12730) [0x7f74f6c3f730]",
"(gsignal()+0x10b) [0x7f74f67227bb]",
"(abort()+0x121) [0x7f74f670d535]",
"(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a3) [0x7f74f7d78e79]",
"(()+0x282000) [0x7f74f7d79000]",
"(Paxos::store_state(MMonPaxos*)+0xaa8) [0x55b9540ae6f8]",
"(Paxos::handle_commit(boost::intrusive_ptr<MonOpRequest>)+0x2ea) [0x55b9540aea5a]",
"(Paxos::dispatch(boost::intrusive_ptr<MonOpRequest>)+0x223) [0x55b9540b4213]",
"(Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0x131c) [0x55b953fe9b1c]",
"(Monitor::_ms_dispatch(Message*)+0x4aa) [0x55b953fea10a]",
"(Monitor::ms_dispatch(Message*)+0x26) [0x55b954019a36]",
"(Dispatcher::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0x26) [0x55b954015f66]",
"(DispatchQueue::entry()+0x1a49) [0x7f74f7fb1e69]",
"(DispatchQueue::DispatchThread::entry()+0xd) [0x7f74f805f9ed]",
"(()+0x7fa3) [0x7f74f6c34fa3]",
"(clone()+0x3f) [0x7f74f67e44cf]"
],
"process_name": "ceph-mon",
"assert_line": 485,
"archived": "2020-01-21 07:02:49.041386",
"assert_file": "/mnt/npool/tlamprecht/pve-ceph/ceph-14.2.6/src/common/ceph_time.h",
"utsname_sysname": "Linux",
"os_version": "10 (buster)",
"os_id": "10",
"assert_msg": "/mnt/npool/tlamprecht/pve-ceph/ceph-14.2.6/src/common/ceph_time.h: In function 'ceph::time_detail::timespan ceph::to_timespan(ceph::time_detail::signedspan)' thread 7f74edcfb700 time 2020-01-20 22:32:56.933800\n/mnt/npool/tlamprecht/pve-ceph/ceph-14.2.6/src/common/ceph_time.h: 485: FAILED ceph_assert(z >= signedspan::zero())\n",
"assert_func": "ceph::time_detail::timespan ceph::to_timespan(ceph::time_detail::signedspan)",
"ceph_version": "14.2.6",
"os_name": "Debian GNU/Linux 10 (buster)",
"timestamp": "2020-01-20 21:32:56.947402Z",
"assert_thread_name": "ms_dispatch",
"utsname_release": "5.3.13-1-pve",
"utsname_hostname": "promo2",
"crash_id": "2020-01-20_21:32:56.947402Z_3ae7220c-23c9-478a-a22d-626c2fa34414",
"assert_condition": "z >= signedspan::zero()",
"utsname_version": "#1 SMP PVE 5.3.13-1 (Thu, 05 Dec 2019 07:18:14 +0100)"
}


Those are two outputs from different crash reports.

Maybe someone has an idea.

Best regards
ff
 
First of all, please post output in CODE tags (found under the three dots in the editor); formatted text is much easier to read.

Is the time the same on all nodes in the cluster?
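
The failed assertion in your reports (ceph_assert(z >= signedspan::zero()) in ceph::to_timespan) points to a time span that came out negative, so clock behaviour is worth ruling out. A rough way to compare the nodes, assuming chrony is used everywhere (promo1 and promo3 are only guesses at the other node names):

Code:
for h in promo1 promo2 promo3 promo4; do
    echo "== $h =="
    ssh "$h" 'date -u +"%F %T.%N"; chronyc tracking | grep -E "System time|Leap status"'
done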
 
Hey,

thanks for your reply. Yes, the time is the same on all servers; I synchronize them via chrony against our NTP server.

I did find a bug tracker entry (or rather, was pointed to one):

https://tracker.ceph.com/issues/43365

A similar, possibly identical, problem, but no solution there.

For reference, all my Proxmox nodes are on the current patch level.

Best regards
 
Code:
# pveversion -v

proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e)
pve-kernel-5.3: 6.1-3
pve-kernel-helper: 6.1-3
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.13-3-pve: 5.3.13-3
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph: 14.2.6-pve1
ceph-fuse: 14.2.6-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.14-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-11
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-4
libpve-storage-perl: 6.1-4
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-19
pve-docs: 6.1-4
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-10
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

Yes, only the monitors; for example, currently:

see the attachments.

I have simply let it run for now. The cluster works perfectly fine; it's just that the warnings come back some time after I dismiss them.
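
For completeness, a rough sketch of how I check whether the returning warning comes from fresh crashes or just from reports I have not archived yet:

Code:
ceph crash ls-new       # only crash reports that have not been archived yet
ceph crash stat         # how many crash reports are stored and how old they are
ceph crash archive-all  # acknowledge everything; the warning should only come back after a new crash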

Best regards
 

Attachments

  • 1.PNG (27.5 KB)
  • 2.PNG (31.2 KB)
