Ceph OSD process died spontaneously

Ingo S

Renowned Member
Oct 16, 2016
Hello everyone,

Yesterday one of our OSDs spontaneously dropped out of the cluster without actually being defective. The service was still shown as "running", but in the overview the OSD is marked as "down" and "out". In addition, there is a slow ops warning for this OSD.
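For reference, this mismatch can be inspected from any cluster node with the standard ceph and systemctl commands (osd.3 is the OSD affected here):

# OSD up/down and in/out state as the cluster sees it
ceph osd tree
# details behind the slow ops warning
ceph health detail
# what systemd thinks the service is doing
systemctl status ceph-osd@3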

At the time, the kernel logged the following:

[Oct28 16:51] BUG: kernel NULL pointer dereference, address: 00000000000000c0
[ +0.000040] #PF: supervisor read access in kernel mode
[ +0.000020] #PF: error_code(0x0000) - not-present page
[ +0.000020] PGD 0 P4D 0
[ +0.000014] Oops: 0000 [#1] SMP PTI
[ +0.000016] CPU: 47 PID: 211414 Comm: lxcfs Tainted: P O 5.11.22-4-pve #1
[ +0.000030] Hardware name: Supermicro X10DRi/X10DRi, BIOS 2.0 12/28/2015
[ +0.000024] RIP: 0010:blk_mq_put_rq_ref+0xa/0x60
[ +0.000024] Code: 15 0f b6 d3 4c 89 e7 be 01 00 00 00 e8 cf fe ff ff 5b 41 5c 5d c3 0f 0b 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 48 8b 47 10 <48> 8b 80 c0 00 00 00 48 89 e5 48 3b 78 40 74 1f 4c 8d 87 e8 00 00
[ +0.000057] RSP: 0018:ffffa5f7b3b17b08 EFLAGS: 00010206
[ +0.000020] RAX: 0000000000000000 RBX: ffffa5f7b3b17b88 RCX: 0000000000000002
[ +0.000024] RDX: 0000000000000001 RSI: 0000000000000206 RDI: ffff89c2118a6c00
[ +0.000025] RBP: ffffa5f7b3b17b40 R08: 0000000000000000 R09: 000000000000001a
[ +0.000025] R10: 0000000000000088 R11: 000000000000000f R12: ffff89c2118a6c00
[ +0.000024] R13: ffff89c211a54400 R14: 0000000000000000 R15: 0000000000000001
[ +0.000025] FS: 00007f40e9e2c700(0000) GS:ffff89e13fdc0000(0000) knlGS:0000000000000000
[ +0.000027] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ +0.000021] CR2: 00000000000000c0 CR3: 00000001303d0002 CR4: 00000000003726e0
[ +0.000025] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ +0.000023] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ +0.000025] Call Trace:
[ +0.000013] ? bt_iter+0x54/0x90
[ +0.000016] blk_mq_queue_tag_busy_iter+0x1a2/0x2d0
[ +0.000020] ? blk_add_rq_to_plug+0x50/0x50
[ +0.000025] ? blk_add_rq_to_plug+0x50/0x50
[ +0.000018] blk_mq_in_flight+0x38/0x60
[ +0.000018] diskstats_show+0x159/0x300
[ +0.000018] seq_read_iter+0x2c6/0x4b0
[ +0.000017] proc_reg_read_iter+0x51/0x80
[ +0.000019] new_sync_read+0x10d/0x190
[ +0.000018] vfs_read+0x15a/0x1c0
[ +0.000015] ksys_read+0x67/0xe0
[ +0.000017] __x64_sys_read+0x1a/0x20
[ +0.000017] do_syscall_64+0x38/0x90
[ +0.000017] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ +0.000024] RIP: 0033:0x7f40eb7d4ecc
[ +0.000016] Code: ec 28 48 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 c9 5e f9 ff 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 34 44 89 c7 48 89 44 24 08 e8 ff 5e f9 ff 48
[ +0.000057] RSP: 002b:00007f40e9e2b9d0 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[ +0.000027] RAX: ffffffffffffffda RBX: 00007f40d4000db0 RCX: 00007f40eb7d4ecc
[ +0.000024] RDX: 0000000000000400 RSI: 00007f40e0027050 RDI: 0000000000000008
[ +0.000025] RBP: 00007f40eb8a64a0 R08: 0000000000000001 R09: 00007f4094000080
[ +0.000024] R10: 0000000000000200 R11: 0000000000000246 R12: 0000000000000043
[ +0.000024] R13: 00007f40eb8a58a0 R14: 0000000000000d68 R15: 0000000000000d68
[ +0.000025] Modules linked in: uas usb_storage binfmt_misc tcp_diag inet_diag rbd libceph veth rpcsec_gss_krb5 auth_rpcgss nfsv4 nfsv3 nfs_acl nfs lockd grace nfs_ssc fscache ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables sctp ip6_udp_tunnel udp_tunnel iptable_filter bpfilter softdog bonding tls nfnetlink_log nfnetlink ipmi_ssif intel_rapl_msr intel_rapl_common sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper rapl ast drm_vram_helper drm_ttm_helper intel_cstate ttm joydev pcspkr input_leds efi_pstore drm_kms_helper cec rc_core fb_sys_fops syscopyarea sysfillrect sysimgblt mxm_wmi mei_me ioatdma mei acpi_ipmi ipmi_si ipmi_devintf ipmi_msghandler mac_hid acpi_power_meter acpi_pad zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) vhost_net vhost vhost_iotlb tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp
[ +0.000079] libiscsi_tcp libiscsi scsi_transport_iscsi drm sunrpc ip_tables x_tables autofs4 btrfs blake2b_generic xor raid6_pq hid_generic usbmouse usbkbd dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c usbhid hid crc32_pclmul megaraid_sas xhci_pci xhci_pci_renesas i2c_i801 nvme ahci i2c_smbus lpc_ich ixgbe igb xhci_hcd libahci xfrm_algo i2c_algo_bit dca nvme_core mdio wmi
[ +0.007858] CR2: 00000000000000c0
[ +0.001381] ---[ end trace 8a307a407d905a1b ]---
[ +0.078750] RIP: 0010:blk_mq_put_rq_ref+0xa/0x60
[ +0.001535] Code: 15 0f b6 d3 4c 89 e7 be 01 00 00 00 e8 cf fe ff ff 5b 41 5c 5d c3 0f 0b 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 48 8b 47 10 <48> 8b 80 c0 00 00 00 48 89 e5 48 3b 78 40 74 1f 4c 8d 87 e8 00 00
[ +0.001269] RSP: 0018:ffffa5f7b3b17b08 EFLAGS: 00010206
[ +0.001331] RAX: 0000000000000000 RBX: ffffa5f7b3b17b88 RCX: 0000000000000002
[ +0.001452] RDX: 0000000000000001 RSI: 0000000000000206 RDI: ffff89c2118a6c00
[ +0.001285] RBP: ffffa5f7b3b17b40 R08: 0000000000000000 R09: 000000000000001a
[ +0.001230] R10: 0000000000000088 R11: 000000000000000f R12: ffff89c2118a6c00
[ +0.001201] R13: ffff89c211a54400 R14: 0000000000000000 R15: 0000000000000001
[ +0.001300] FS: 00007f40e9e2c700(0000) GS:ffff89e13fdc0000(0000) knlGS:0000000000000000
[ +0.001370] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ +0.001405] CR2: 00000000000000c0 CR3: 00000001303d0002 CR4: 00000000003726e0
[ +0.001297] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ +0.001325] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[Oct28 16:54] libceph: osd3 down
[Oct28 17:04] libceph: osd3 weight 0x0 (out)
This morning I then tried to actually stop the OSD with systemctl stop ceph-osd@3.
After that the console was blocked, and after a while the journal showed the following messages:
Oct 29 08:31:01 vm-2 ceph-osd[3736838]: 2021-10-29T08:31:01.200+0200 7feaa5330700 -1 monclient: _check_auth_rotating possible clock skew, rotating keys expired way too early (before 2021-10-29T07:31:01.204779+0200)
Oct 29 08:31:02 vm-2 ceph-osd[3736838]: 2021-10-29T08:31:02.200+0200 7feaa5330700 -1 monclient: _check_auth_rotating possible clock skew, rotating keys expired way too early (before 2021-10-29T07:31:02.204942+0200)
Oct 29 08:31:02 vm-2 systemd[1]: Stopping Ceph object storage daemon osd.3...
Oct 29 08:31:02 vm-2 ceph-osd[3736838]: 2021-10-29T08:31:02.784+0200 7feab032a700 -1 received signal: Terminated from /sbin/init (PID: 1) UID: 0
Oct 29 08:31:02 vm-2 ceph-osd[3736838]: 2021-10-29T08:31:02.784+0200 7feab032a700 -1 osd.3 140824 *** Got signal Terminated ***
Oct 29 08:31:02 vm-2 ceph-osd[3736838]: 2021-10-29T08:31:02.784+0200 7feab032a700 -1 osd.3 140824 *** Immediate shutdown (osd_fast_shutdown=true) ***
Oct 29 08:31:03 vm-2 zabbix_agentd[2322]: no active checks on server [zabbix.langeoog.de:10051]: host [vm-2] not found
Oct 29 08:31:18 vm-2 sudo[1599072]: zabbix : PWD=/ ; USER=root ; COMMAND=/etc/zabbix/scripts/ceph-rec-speed.sh
Oct 29 08:31:18 vm-2 sudo[1599072]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Oct 29 08:31:18 vm-2 sudo[1599072]: pam_unix(sudo:session): session closed for user root
Oct 29 08:31:48 vm-2 sudo[1599375]: zabbix : PWD=/ ; USER=root ; COMMAND=/etc/zabbix/scripts/ceph-rec-speed.sh
Oct 29 08:31:48 vm-2 sudo[1599375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Oct 29 08:31:49 vm-2 sudo[1599375]: pam_unix(sudo:session): session closed for user root
Oct 29 08:32:00 vm-2 systemd[1]: Starting Proxmox VE replication runner...
Oct 29 08:32:00 vm-2 systemd[1]: pvesr.service: Succeeded.
Oct 29 08:32:00 vm-2 systemd[1]: Finished Proxmox VE replication runner.
Oct 29 08:32:18 vm-2 sudo[1599655]: zabbix : PWD=/ ; USER=root ; COMMAND=/etc/zabbix/scripts/ceph-rec-speed.sh
Oct 29 08:32:18 vm-2 sudo[1599655]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Oct 29 08:32:19 vm-2 sudo[1599655]: pam_unix(sudo:session): session closed for user root
Oct 29 08:32:32 vm-2 systemd[1]: ceph-osd@3.service: State 'stop-sigterm' timed out. Killing.
Oct 29 08:32:32 vm-2 systemd[1]: ceph-osd@3.service: Killing process 3736838 (ceph-osd) with signal SIGKILL.
Oct 29 08:32:32 vm-2 systemd[1]: ceph-osd@3.service: Killing process 3737306 (bstore_kv_sync) with signal SIGKILL.
Oct 29 08:32:48 vm-2 sudo[1599922]: zabbix : PWD=/ ; USER=root ; COMMAND=/etc/zabbix/scripts/ceph-rec-speed.sh
Oct 29 08:32:48 vm-2 sudo[1599922]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Oct 29 08:32:49 vm-2 sudo[1599922]: pam_unix(sudo:session): session closed for user root
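A plausible explanation for the SIGKILL having no effect: after the kernel oops above, the ceph-osd process may be stuck in uninterruptible sleep (state D), which even SIGKILL cannot interrupt. One way to check, using the PID from the journal (standard ps options):

# STAT column 'D' = uninterruptible sleep, immune to signals
ps -o pid,stat,wchan:30,comm -p 3736838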
When I try to start the OSD, I get an error:
Oct 29 09:16:32 vm-2 systemd[1]: ceph-osd@3.service: Found left-over process 3736838 (ceph-osd) in control group while starting unit. Ignoring.
Oct 29 09:16:32 vm-2 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Oct 29 09:16:32 vm-2 systemd[1]: Starting Ceph object storage daemon osd.3...
Oct 29 09:16:32 vm-2 systemd[1]: ceph-osd@3.service: Found left-over process 3736838 (ceph-osd) in control group while starting unit. Ignoring.
Oct 29 09:16:32 vm-2 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
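If the left-over process really is unkillable, only a reboot of the node clears it. To keep Ceph from rebalancing while the node is down, the usual pattern is (a sketch with standard ceph flags):

# stop the cluster from marking OSDs out and starting recovery
ceph osd set noout
reboot
# once the node and its OSDs are back up:
ceph osd unset noout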
Does anyone have an idea what is going on here?
 
Hi,
how did you check that the OSD, i.e. the physical disk, is not defective?
We have had several disks that looked fine according to SMART and self-tests, but still had some kind of "internal" fault.
In those cases only replacing the disk helped.
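For a check beyond the GUI, smartmontools can be run directly against the device (a sketch; replace /dev/sdX with the OSD's disk):

# full SMART attributes and error log
smartctl -a /dev/sdX
# start an extended self-test (non-destructive)
smartctl -t long /dev/sdX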
 
With slow ops, you usually have a defective disk. Such disks typically show up as poor performance well before SMART weeds them out.
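The slow ops can also be narrowed down per OSD via the admin socket (standard ceph daemon commands, here for osd.3 as in this thread):

# operations currently stuck in this OSD
ceph daemon osd.3 dump_ops_in_flight
# recent slow operations with a timing breakdown
ceph daemon osd.3 dump_historic_slow_ops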
 
Well, the SSD is OK. No SMART errors and no I/O errors in the kernel log. The SSD is also still accessible and readable under /dev. Of course I did not try writing to it, since I did not want to damage the data on it.

After I was able to drain the server, I rebooted it and the SSD went back into operation normally.

Right before the crash, the kernel logged this "BUG" message: [Oct28 16:51] BUG: kernel NULL pointer dereference, address: 00000000000000c0
So I tend to think that something went wrong in the software.
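The trace (blk_mq_put_rq_ref called from diskstats_show) does point at the kernel's blk-mq layer rather than at the disk; similar NULL dereferences were reported against 5.11-era kernels. If that is the cause here, a newer pve-kernel may already contain the fix. Checking and updating (standard Proxmox/apt commands):

# currently running kernel
uname -r
# pull in the latest available pve-kernel
apt update && apt full-upgrade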
 
