Ceph - strange behavior

magnusek (New Member), Apr 26, 2019
Hi,
After the last update, Ceph started producing a large number of log entries.
Does anyone else have this problem?

Previously, the logs only showed entries like this:
cluster [INF] overall HEALTH_OK

Now it looks as if debug logging has been enabled?
Code:
2019-04-26 15:59:52.814310 mgr.alpha client.1127169 192.168.0.20:0/2567037334 167 : cluster [DBG] pgmap v168: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 2.97KiB/s wr, 0op/s
2019-04-26 15:59:54.835226 mgr.alpha client.1127169 192.168.0.20:0/2567037334 168 : cluster [DBG] pgmap v169: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 3.96KiB/s wr, 0op/s
2019-04-26 15:59:56.854476 mgr.alpha client.1127169 192.168.0.20:0/2567037334 169 : cluster [DBG] pgmap v170: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 1.98KiB/s wr, 0op/s
2019-04-26 16:00:00.000219 mon.alpha mon.0 192.168.0.20:6789/0 861 : cluster [INF] overall HEALTH_OK
2019-04-26 15:59:58.875114 mgr.alpha client.1127169 192.168.0.20:0/2567037334 170 : cluster [DBG] pgmap v171: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 2.97KiB/s wr, 0op/s
2019-04-26 16:00:00.875887 mgr.alpha client.1127169 192.168.0.20:0/2567037334 171 : cluster [DBG] pgmap v172: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 1.98KiB/s wr, 0op/s
2019-04-26 16:00:02.894433 mgr.alpha client.1127169 192.168.0.20:0/2567037334 172 : cluster [DBG] pgmap v173: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 1.98KiB/s wr, 0op/s
2019-04-26 16:00:04.919250 mgr.alpha client.1127169 192.168.0.20:0/2567037334 173 : cluster [DBG] pgmap v174: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 1.98KiB/s wr, 0op/s
2019-04-26 16:00:06.938398 mgr.alpha client.1127169 192.168.0.20:0/2567037334 174 : cluster [DBG] pgmap v175: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 1015B/s wr, 0op/s
2019-04-26 16:00:08.959108 mgr.alpha client.1127169 192.168.0.20:0/2567037334 175 : cluster [DBG] pgmap v176: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 2.97KiB/s wr, 0op/s
2019-04-26 16:00:10.978437 mgr.alpha client.1127169 192.168.0.20:0/2567037334 176 : cluster [DBG] pgmap v177: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 1.98KiB/s wr, 0op/s
2019-04-26 16:00:12.998451 mgr.alpha client.1127169 192.168.0.20:0/2567037334 177 : cluster [DBG] pgmap v178: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 1.98KiB/s wr, 0op/s
2019-04-26 16:00:15.019236 mgr.alpha client.1127169 192.168.0.20:0/2567037334 178 : cluster [DBG] pgmap v179: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 3.96KiB/s wr, 0op/s
2019-04-26 16:00:17.038521 mgr.alpha client.1127169 192.168.0.20:0/2567037334 179 : cluster [DBG] pgmap v180: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 3.96KiB/s wr, 0op/s
2019-04-26 16:00:19.059173 mgr.alpha client.1127169 192.168.0.20:0/2567037334 180 : cluster [DBG] pgmap v181: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 4.95KiB/s wr, 0op/s
2019-04-26 16:00:21.078370 mgr.alpha client.1127169 192.168.0.20:0/2567037334 181 : cluster [DBG] pgmap v182: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 2.97KiB/s wr, 0op/s
2019-04-26 16:00:23.098360 mgr.alpha client.1127169 192.168.0.20:0/2567037334 182 : cluster [DBG] pgmap v183: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 2.97KiB/s wr, 0op/s
2019-04-26 16:00:25.119320 mgr.alpha client.1127169 192.168.0.20:0/2567037334 183 : cluster [DBG] pgmap v184: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 3.96KiB/s wr, 0op/s
2019-04-26 16:00:27.138410 mgr.alpha client.1127169 192.168.0.20:0/2567037334 184 : cluster [DBG] pgmap v185: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 1.98KiB/s wr, 0op/s
2019-04-26 16:00:29.159172 mgr.alpha client.1127169 192.168.0.20:0/2567037334 185 : cluster [DBG] pgmap v186: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 1.98KiB/s wr, 0op/s
2019-04-26 16:00:31.178478 mgr.alpha client.1127169 192.168.0.20:0/2567037334 186 : cluster [DBG] pgmap v187: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 1013B/s wr, 0op/s
2019-04-26 16:00:33.198291 mgr.alpha client.1127169 192.168.0.20:0/2567037334 187 : cluster [DBG] pgmap v188: 296 pgs: 296 active+clean; 46.7GiB data, 144GiB used, 1.22TiB / 1.36TiB avail; 1013B/s wr, 0op/s
 
This is expected behaviour; see this issue:
https://tracker.ceph.com/issues/37886

If you don't want this information, you need to change the cluster log file level to 'info' in the ceph.conf and restart the MONs (default: 'debug'). You can check the current level with:
Code:
ceph daemon mon.a config show | grep mon_cluster_log_file_level
 
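A minimal sketch of the change (assuming the option is set cluster-wide in the [global] section; it can also go under [mon]), followed by a monitor restart, here using the mon ID 'alpha' from the log above:
Code:
# /etc/ceph/ceph.conf (on Proxmox VE this is a link to /etc/pve/ceph.conf)
[global]
mon_cluster_log_file_level = info

# then restart each monitor, e.g. for the mon named 'alpha':
systemctl restart ceph-mon@alpha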
