Ceph-crash post warning

tsumaru720

Hello,

I've recently built out a new PVE8 cluster and set up Ceph. I've noticed that rebooting a node generates a Ceph crash file, which the ceph-crash service then repeatedly fails to post:

Code:
2023-07-12T13:50:06.996554+01:00 pve1 ceph-crash[1904]: WARNING:ceph-crash:post /var/lib/ceph/crash/2023-07-12T11:30:22.343531Z_1ed0aca4-3c7c-41b9-b3eb-50afccbf39b5 as client.crash.pve1 failed: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')
2023-07-12T13:50:07.163420+01:00 pve1 ceph-crash[1904]: WARNING:ceph-crash:post /var/lib/ceph/crash/2023-07-12T11:30:22.343531Z_1ed0aca4-3c7c-41b9-b3eb-50afccbf39b5 as client.crash failed: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')
2023-07-12T13:50:07.330608+01:00 pve1 ceph-crash[1904]: WARNING:ceph-crash:post /var/lib/ceph/crash/2023-07-12T11:30:22.343531Z_1ed0aca4-3c7c-41b9-b3eb-50afccbf39b5 as client.admin failed: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')

Aside from those log messages, Ceph itself reports no warnings:

Code:
ceph status
  cluster:
    id:     7b52e1a5-25f0-4c39-a341-fc90ae5b6dc0
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum pve1,pve2,pve3 (age 83m)
    mgr: pve2(active, since 2d), standbys: pve3, pve1
    mds: 1/1 daemons up, 2 standby
    osd: 3 osds: 3 up (since 82m), 3 in (since 2d)
 
  data:
    volumes: 1/1 healthy
    pools:   4 pools, 73 pgs
    objects: 21.50k objects, 80 GiB
    usage:   121 GiB used, 1.2 TiB / 1.4 TiB avail
    pgs:     73 active+clean
 
  io:
    client:   484 KiB/s wr, 0 op/s rd, 34 op/s wr

Each node has similar crash files, but those are older ones from previous reboots.
 
Are the errors still occurring? Is Ceph properly installed on all nodes?
 
I used the PVE GUI to install Ceph, and as far as I can tell it is set up and working correctly on all my nodes; ceph status shows everything is OK.

The "crash" file was generated at the time of system boot, but the syslog errors seem to be something trying to do something with it.

Looking at the actual contents of the crash files, this one might be a leftover from when I tried to set up the Ceph dashboard:
Code:
    "backtrace": [
        "  File \"/usr/share/ceph/mgr/restful/__init__.py\", line 1, in <module>\n    from .module import Module",
        "  File \"/usr/share/ceph/mgr/restful/module.py\", line 21, in <module>\n    from OpenSSL import crypto",
        "  File \"/lib/python3/dist-packages/OpenSSL/__init__.py\", line 8, in <module>\n    from OpenSSL import SSL, crypto",
        "  File \"/lib/python3/dist-packages/OpenSSL/SSL.py\", line 19, in <module>\n    from OpenSSL.crypto import (",
        "  File \"/lib/python3/dist-packages/OpenSSL/crypto.py\", line 21, in <module>\n    from cryptography import utils, x509",
        "  File \"/lib/python3/dist-packages/cryptography/x509/__init__.py\", line 6, in <module>\n    from cryptography.x509 import certificate_transparency",
        "  File \"/lib/python3/dist-packages/cryptography/x509/certificate_transparency.py\", line 10, in <module>\n    from cryptography.hazmat.bindings._rust import x509 as rust_x509",
        "ImportError: PyO3 modules may only be initialized once per interpreter process"
    ],
    "mgr_module": "restful",
    "mgr_module_caller": "PyModule::load_subclass_of",
    "mgr_python_exception": "ImportError"

I do have the "resful" module disabled
 
I just updated one node, and since the update I have been getting the same messages:

Code:
Aug 05 11:06:39 hvirt04 ceph-crash[994]: WARNING:ceph-crash:post /var/lib/ceph/crash/2023-08-05T07:08:54.517493Z_4ceb66b6-dae6-4761-9a68-febd5711a402 as client.crash.hvirt04 failed: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')
Aug 05 11:06:39 hvirt04 ceph-crash[994]: WARNING:ceph-crash:post /var/lib/ceph/crash/2023-08-05T07:08:54.517493Z_4ceb66b6-dae6-4761-9a68-febd5711a402 as client.crash failed: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')
Aug 05 11:06:39 hvirt04 ceph-crash[994]: WARNING:ceph-crash:post /var/lib/ceph/crash/2023-08-05T07:08:54.517493Z_4ceb66b6-dae6-4761-9a68-febd5711a402 as client.admin failed: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')
 
