Hello,
I've recently built out a new PVE 8 cluster and set up Ceph. I've noticed that on rebooting a node, it seems to generate a Ceph crash file and then fails to post it:
Code:
2023-07-12T13:50:06.996554+01:00 pve1 ceph-crash[1904]: WARNING:ceph-crash:post /var/lib/ceph/crash/2023-07-12T11:30:22.343531Z_1ed0aca4-3c7c-41b9-b3eb-50afccbf39b5 as client.crash.pve1 failed: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')
2023-07-12T13:50:07.163420+01:00 pve1 ceph-crash[1904]: WARNING:ceph-crash:post /var/lib/ceph/crash/2023-07-12T11:30:22.343531Z_1ed0aca4-3c7c-41b9-b3eb-50afccbf39b5 as client.crash failed: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')
2023-07-12T13:50:07.330608+01:00 pve1 ceph-crash[1904]: WARNING:ceph-crash:post /var/lib/ceph/crash/2023-07-12T11:30:22.343531Z_1ed0aca4-3c7c-41b9-b3eb-50afccbf39b5 as client.admin failed: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')
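The failing call is conf_read_file, so presumably /etc/ceph/ceph.conf wasn't readable at the moment ceph-crash tried to post. A quick sanity check, assuming the standard PVE layout where /etc/ceph/ceph.conf is a symlink into the pmxcfs mount (the auth gets may simply return ENOENT if those keys were never created):
Code:
# on PVE the conf normally points into /etc/pve
ls -l /etc/ceph/ceph.conf /etc/pve/ceph.conf

# check the clients ceph-crash tries, in the order from the log above
ceph auth get client.crash.pve1
ceph auth get client.crash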
Aside from those logs, there are actually no warnings in Ceph:
Code:
ceph status
  cluster:
    id:     7b52e1a5-25f0-4c39-a341-fc90ae5b6dc0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum pve1,pve2,pve3 (age 83m)
    mgr: pve2(active, since 2d), standbys: pve3, pve1
    mds: 1/1 daemons up, 2 standby
    osd: 3 osds: 3 up (since 82m), 3 in (since 2d)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 73 pgs
    objects: 21.50k objects, 80 GiB
    usage:   121 GiB used, 1.2 TiB / 1.4 TiB avail
    pgs:     73 active+clean

  io:
    client: 484 KiB/s wr, 0 op/s rd, 34 op/s wr
Each node seems to have similar crash files, but they are older ones from previous reboots.
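For completeness, this is how I've been looking at the stale entries with the standard ceph crash CLI (the ID is just the one from the log above; the meta path assumes the layout ceph-crash itself posts from):
Code:
# crash reports still sitting on the local node's disk
ls /var/lib/ceph/crash/

# reports that actually made it into the cluster
ceph crash ls

# manually re-post one that failed, then archive it so it stops showing as new
ceph crash post -i /var/lib/ceph/crash/2023-07-12T11:30:22.343531Z_1ed0aca4-3c7c-41b9-b3eb-50afccbf39b5/meta
ceph crash archive 2023-07-12T11:30:22.343531Z_1ed0aca4-3c7c-41b9-b3eb-50afccbf39b5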