OSD socket closed

samumsq

New Member
Aug 27, 2025
Hi everyone,

Recently I had a problem with my PVE cluster, and while looking through the logs we found this kind of entry on every node:

Aug 29 10:44:26 pve01-poz kernel: [697358.187283] libceph: osd17 (1)10.0.0.3:6810 socket closed (con state OPEN)
Aug 29 10:45:43 pve01-poz kernel: [697434.733343] libceph: osd1 (1)10.0.0.1:6837 socket closed (con state OPEN)
Aug 29 10:46:54 pve01-poz kernel: [697506.414196] libceph: osd15 (1)10.0.0.1:6836 socket closed (con state OPEN)
Aug 29 10:47:56 pve01-poz kernel: [697567.855454] libceph: osd4 (1)10.0.0.1:6802 socket closed (con state OPEN)
Aug 29 10:49:08 pve01-poz kernel: [697639.497740] libceph: osd4 (1)10.0.0.1:6802 socket closed (con state OPEN)
Aug 29 10:51:50 pve01-poz kernel: [697802.060504] libceph: osd12 (1)10.0.0.3:6811 socket closed (con state OPEN)
Aug 29 10:55:37 pve01-poz kernel: [698028.664214] libceph: osd1 (1)10.0.0.1:6837 socket closed (con state OPEN)

The problem we had was with Corosync, which uses a different network and different Ethernet ports. But we'd like to know whether these log entries are normal and just part of how Ceph works, or whether we should dig deeper to find out what is causing them.
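
For reference, Corosync runs on a separate network from Ceph; assuming the default Proxmox VE config paths, the two networks can be compared with:

Code:
grep -E 'public_network|cluster_network' /etc/pve/ceph.conf
grep 'ring0_addr' /etc/pve/corosync.conf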

Thank you in advance for your time and help
 
Hi,

Aug 29 10:51:50 pve01-poz kernel: [697802.060504] libceph: osd12 (1)10.0.0.3:6811 socket closed (con state OPEN)
It looks like the Ceph client (the Proxmox node) had an open TCP connection to an OSD (Object Storage Daemon) that was unexpectedly closed.
This can be part of the normal connection lifecycle: Ceph closes and reopens sockets during rebalancing, recovery, or periods of network instability.
Check the Ceph health to be sure:
Code:
ceph -s
Code:
ceph health detail
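If the messages keep appearing while the cluster reports HEALTH_OK, you can also look at the log of one of the affected OSDs on its host. A minimal sketch, using osd.17 from your output as an example and assuming the standard ceph-osd systemd units that Proxmox deploys: ceph osd tree shows which node each OSD lives on, and journalctl shows that OSD's recent log.
Code:
ceph osd tree
journalctl -u ceph-osd@17 --since "1 hour ago"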
 