Hi,
After replacing the HDDs with SSDs, I am seeing these messages:
Apr 24 17:30:53 sr1 kernel: libceph: osd5 (1)192.168.10.201:6812 socket closed (con state OPEN)
Apr 24 17:30:53 sr1 kernel: libceph: osd5 (1)192.168.10.201:6812 socket closed (con state OPEN)
Apr 24 17:30:53 sr1 kernel: libceph: osd5 (1)192.168.10.201:6812 socket closed (con state OPEN)
Apr 24 17:30:53 sr1 kernel: libceph: osd5 (1)192.168.10.201:6812 socket closed (con state OPEN)
Apr 24 17:30:53 sr1 kernel: libceph: osd5 (1)192.168.10.201:6812 socket closed (con state OPEN)
Apr 24 17:30:53 sr1 kernel: libceph: osd5 (1)192.168.10.201:6812 socket closed (con state OPEN)
Apr 24 17:30:53 sr1 kernel: libceph: osd5 (1)192.168.10.201:6812 socket closed (con state OPEN)
Apr 24 17:32:01 sr1 kernel: libceph: osd2 (1)192.168.10.203:6815 socket closed (con state OPEN)
Apr 24 17:32:32 sr1 kernel: libceph: osd8 (1)192.168.10.205:6801 socket closed (con state OPEN)
Apr 24 17:33:14 sr1 kernel: libceph: osd8 (1)192.168.10.205:6801 socket closed (con state OPEN)
Apr 24 17:33:14 sr1 kernel: libceph: osd8 (1)192.168.10.205:6801 socket closed (con state OPEN)
All nodes use the same 2 TB SSD model, 4 per node.
The network is 10 Gb.
There are 3 nodes: sr1 (192.168.1.201), sr3 (192.168.1.203), sr5 (192.168.1.205).
Only one node, sr1, logs this.
But ceph reports HEALTH_OK, and the ceph log also looks fine:
2025-04-24T17:33:16.218178-0300 mgr.sr1 (mgr.52044469) 13820 : cluster 0 pgmap v13775: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 264 KiB/s rd, 1.5 MiB/s wr, 131 op/s
2025-04-24T17:33:18.218770-0300 mgr.sr1 (mgr.52044469) 13821 : cluster 0 pgmap v13776: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 200 KiB/s rd, 903 KiB/s wr, 80 op/s
2025-04-24T17:33:20.219084-0300 mgr.sr1 (mgr.52044469) 13822 : cluster 0 pgmap v13777: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 197 KiB/s rd, 811 KiB/s wr, 71 op/s
2025-04-24T17:33:22.220455-0300 mgr.sr1 (mgr.52044469) 13823 : cluster 0 pgmap v13778: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 207 KiB/s rd, 1.4 MiB/s wr, 97 op/s
2025-04-24T17:33:24.221292-0300 mgr.sr1 (mgr.52044469) 13824 : cluster 0 pgmap v13779: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 210 KiB/s rd, 1.2 MiB/s wr, 79 op/s
2025-04-24T17:33:26.222503-0300 mgr.sr1 (mgr.52044469) 13825 : cluster 0 pgmap v13780: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 251 KiB/s rd, 1.4 MiB/s wr, 104 op/s
2025-04-24T17:33:28.223255-0300 mgr.sr1 (mgr.52044469) 13826 : cluster 0 pgmap v13781: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 95 KiB/s rd, 1.3 MiB/s wr, 82 op/s
2025-04-24T17:33:30.223579-0300 mgr.sr1 (mgr.52044469) 13827 : cluster 0 pgmap v13782: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 95 KiB/s rd, 1.2 MiB/s wr, 76 op/s
2025-04-24T17:33:32.224737-0300 mgr.sr1 (mgr.52044469) 13828 : cluster 0 pgmap v13783: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 174 KiB/s rd, 1.7 MiB/s wr, 119 op/s
2025-04-24T17:33:34.225334-0300 mgr.sr1 (mgr.52044469) 13829 : cluster 0 pgmap v13784: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 178 KiB/s rd, 1.1 MiB/s wr, 99 op/s
2025-04-24T17:33:34.715448-0300 osd.8 (osd.8) 23 : cluster 0 2.1b6 scrub starts
2025-04-24T17:33:35.526752-0300 osd.8 (osd.8) 24 : cluster 0 2.1b6 scrub ok
2025-04-24T17:33:36.226474-0300 mgr.sr1 (mgr.52044469) 13830 : cluster 0 pgmap v13785: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 317 KiB/s rd, 1.5 MiB/s wr, 118 op/s
2025-04-24T17:33:38.227333-0300 mgr.sr1 (mgr.52044469) 13831 : cluster 0 pgmap v13786: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 2.7 MiB/s rd, 1.2 MiB/s wr, 119 op/s
2025-04-24T17:33:40.227652-0300 mgr.sr1 (mgr.52044469) 13832 : cluster 0 pgmap v13787: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 2.7 MiB/s rd, 1.0 MiB/s wr, 108 op/s
2025-04-24T17:33:42.228783-0300 mgr.sr1 (mgr.52044469) 13833 : cluster 0 pgmap v13788: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 50 MiB/s rd, 1.4 MiB/s wr, 553 op/s
2025-04-24T17:33:44.229340-0300 mgr.sr1 (mgr.52044469) 13834 : cluster 0 pgmap v13789: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 67 MiB/s rd, 1.0 MiB/s wr, 673 op/s
2025-04-24T17:33:46.230481-0300 mgr.sr1 (mgr.52044469) 13835 : cluster 0 pgmap v13790: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 131 MiB/s rd, 1.3 MiB/s wr, 1.22k op/s
2025-04-24T17:33:48.231091-0300 mgr.sr1 (mgr.52044469) 13836 : cluster 0 pgmap v13791: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 147 MiB/s rd, 1.0 MiB/s wr, 1.33k op/s
2025-04-24T17:33:50.231414-0300 mgr.sr1 (mgr.52044469) 13837 : cluster 0 pgmap v13792: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 144 MiB/s rd, 949 KiB/s wr, 1.30k op/s
2025-04-24T17:33:52.232571-0300 mgr.sr1 (mgr.52044469) 13838 : cluster 0 pgmap v13793: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 210 MiB/s rd, 1.5 MiB/s wr, 1.89k op/s
2025-04-24T17:33:54.233154-0300 mgr.sr1 (mgr.52044469) 13839 : cluster 0 pgmap v13794: 513 pgs: 513 active+clean; 1.6 TiB data, 4.6 TiB used, 17 TiB / 22 TiB avail; 180 MiB/s rd, 1.3 MiB/s wr, 1.60k op/s
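In case it helps, this is a rough way to summarize which OSD endpoints the kernel client is complaining about and how often (the grep pattern just assumes the default dmesg/journal line format shown above):

```shell
# Count "socket closed" kernel messages per OSD endpoint.
# Pipe kernel log lines in, e.g.: journalctl -k | summarize_libceph
summarize_libceph() {
  grep -oE 'osd[0-9]+ \([0-9]+\)[0-9.:]+' | sort | uniq -c | sort -rn
}
```

Running that on the excerpt above shows osd5 at 192.168.10.201:6812 logged 7 times in the same second, and osd8 at 192.168.10.205:6801 three times.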
Thanks