Hi everyone,
Recently I had a problem with my PVE cluster, and while looking through the logs we found this kind of log entry on every node:
Aug 29 10:44:26 pve01-poz kernel: [697358.187283] libceph: osd17 (1)10.0.0.3:6810 socket closed (con state OPEN)
Aug 29 10:45:43 pve01-poz kernel: [697434.733343] libceph: osd1 (1)10.0.0.1:6837 socket closed (con state OPEN)
Aug 29 10:46:54 pve01-poz kernel: [697506.414196] libceph: osd15 (1)10.0.0.1:6836 socket closed (con state OPEN)
Aug 29 10:47:56 pve01-poz kernel: [697567.855454] libceph: osd4 (1)10.0.0.1:6802 socket closed (con state OPEN)
Aug 29 10:49:08 pve01-poz kernel: [697639.497740] libceph: osd4 (1)10.0.0.1:6802 socket closed (con state OPEN)
Aug 29 10:51:50 pve01-poz kernel: [697802.060504] libceph: osd12 (1)10.0.0.3:6811 socket closed (con state OPEN)
Aug 29 10:55:37 pve01-poz kernel: [698028.664214] libceph: osd1 (1)10.0.0.1:6837 socket closed (con state OPEN)
The problem we had was with corosync, which uses a separate network and different Ethernet ports. But we'd like to know whether these logs are normal and just part of how Ceph works, or whether we should dig deeper to find out what is causing them.
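For context on how often this happens, here is a minimal sketch of how one could tally these events per OSD from an exported kernel log. The kernel.log filename is just an assumption; on each node it could be produced with journalctl -k > kernel.log first:

import re
from collections import Counter

# Matches lines like:
# libceph: osd17 (1)10.0.0.3:6810 socket closed (con state OPEN)
PATTERN = re.compile(r"libceph: (osd\d+) \(\d\)(\S+) socket closed")

counts = Counter()
with open("kernel.log") as f:  # hypothetical path, adjust per node
    for line in f:
        m = PATTERN.search(line)
        if m:
            # Key by OSD id and address so we can see which OSDs
            # and which network the closures cluster around.
            counts[(m.group(1), m.group(2))] += 1

for (osd, addr), n in counts.most_common():
    print(f"{osd} {addr}: {n} socket closed events")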
Thank you in advance for your time and help.