reducing CEPH Quincy loglevel (mgr / pgmap)

flames

Hello,

After upgrading to Ceph Quincy 17.2.5, the log spam does not go down with ceph tell mon.* injectargs '--mon_cluster_log_file_level info' plus a restart of the mons.

After reading the Ceph docs, Quincy now only supports boolean true/false for the above config. I don't want to disable logging entirely, just reduce it to info or warning level: https://docs.ceph.com/en/quincy/cephadm/operations/#enabling-logging-to-files

There is another option: https://docs.ceph.com/en/quincy/mgr/modules/#logging
ceph config set mgr mgr/<module_name>/log_level <info|debug|critical|error|warning>
But I am struggling to find the correct <module_name> with PVE 7.2-11 + Ceph 17.2.5. Listing the modules with ceph mgr module ls shows me the following:
Code:
MODULE                  
balancer           on (always on)
crash              on (always on)
devicehealth       on (always on)
orchestrator       on (always on)
pg_autoscaler      on (always on)
progress           on (always on)
rbd_support        on (always on)
status             on (always on)
telemetry          on (always on)
volumes            on (always on)
restful            on  
alerts             -    
influx             -    
insights           -    
iostat             -    
localpool          -    
mirroring          -    
nfs                -    
osd_perf_query     -    
osd_support        -    
prometheus         -    
selftest           -    
snap_schedule      -    
stats              -    
telegraf           -    
test_orchestrator  -    
zabbix             -

Which of these modules can I use for the above command?

Thanks in advance!

Trying to disable these messages:
2022-11-10T01:08:05.065144+0100 mgr.hostname (mgr.nnnnnnnnn) 219760 : cluster [DBG] pgmap v220340: 2081 pgs: 2081 active+clean; 10 TiB data, 28 TiB used, 25 TiB / 53 TiB avail; 741 KiB/s rd, 4.2 MiB/s wr, 395 op/s
2022-11-10T01:08:07.070394+0100 mgr.hostname (mgr.nnnnnnnnn) 219761 : cluster [DBG] pgmap v220341: 2081 pgs: 2081 active+clean; 10 TiB data, 28 TiB used, 25 TiB / 53 TiB avail; 3.2 MiB/s rd, 4.6 MiB/s wr, 453 op/s
2022-11-10T01:08:09.075315+0100 mgr.hostname (mgr.nnnnnnnnn) 219762 : cluster [DBG] pgmap v220342: 2081 pgs: 2081 active+clean; 10 TiB data, 28 TiB used, 25 TiB / 53 TiB avail; 6.9 MiB/s rd, 5.4 MiB/s wr, 559 op/s
2022-11-10T01:08:11.079020+0100 mgr.hostname (mgr.nnnnnnnnn) 219763 : cluster [DBG] pgmap v220343: 2081 pgs: 2081 active+clean; 10 TiB data, 28 TiB used, 25 TiB / 53 TiB avail; 8.9 MiB/s rd, 4.9 MiB/s wr, 544 op/s
2022-11-10T01:08:13.084022+0100 mgr.hostname (mgr.nnnnnnnnn) 219764 : cluster [DBG] pgmap v220344: 2081 pgs: 2081 active+clean; 10 TiB data, 28 TiB used, 25 TiB / 53 TiB avail; 9.3 MiB/s rd, 5.5 MiB/s wr, 606 op/s
2022-11-10T01:08:15.088037+0100 mgr.hostname (mgr.nnnnnnnnn) 219765 : cluster [DBG] pgmap v220345: 2081 pgs: 2081 active+clean; 10 TiB data, 28 TiB used, 25 TiB / 53 TiB avail; 9.3 MiB/s rd, 5.9 MiB/s wr, 593 op/s
2022-11-10T01:08:17.092030+0100 mgr.hostname (mgr.nnnnnnnnn) 219766 : cluster [DBG] pgmap v220346: 2081 pgs: 2081 active+clean; 10 TiB data, 28 TiB used, 25 TiB / 53 TiB avail; 8.7 MiB/s rd, 5.9 MiB/s wr, 574 op/s
2022-11-10T01:08:19.097360+0100 mgr.hostname (mgr.nnnnnnnnn) 219767 : cluster [DBG] pgmap v220347: 2081 pgs: 2081 active+clean; 10 TiB data, 28 TiB used, 25 TiB / 53 TiB avail; 6.5 MiB/s rd, 6.1 MiB/s wr, 562 op/s
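For what it's worth, the documented command only targets a specific mgr Python module, and the pgmap [DBG] entries above appear to come from the mgr's cluster log rather than from any module in the list, so a module-level setting may not silence them. The command shape, using balancer purely as an illustrative module name, would be:

```shell
# Illustration only: sets the log level of a single mgr Python module.
# 'balancer' is just an example taken from the module list above; it is
# not confirmed to be the source of the pgmap [DBG] lines.
ceph config set mgr mgr/balancer/log_level warning

# Verify what was set
ceph config get mgr mgr/balancer/log_level
```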
 
Hello flames,
Did you manage to find the exact module, or something else?
My case is the same: I keep getting DBG messages and just want to reduce them to INFO, but everything I try is not working.
Thank you.
 
As far as I know, the only option is to turn off file logging entirely, which is sad.
I don't understand how something like this could have slipped through testing in the Ceph project, but it's pretty annoying to see the same useless messages every 2 seconds in the log, on top of the useless writes the log system receives.
 
Hello,
From my side, no. As websmith says, the only option is to turn off logs entirely, which is basically not an option for me :)
 
Did you ever find a solution to reduce the ceph-mon logs?

I found that most of the writes to disk come from the Ceph monitors (ceph-mon) rather than journald. Now I am trying to find a way to send them to memory, disable them, or move them to RAM:

  • ceph-mon -f --cluster ceph --id N3 --setuser ceph --setgroup ceph [rocksdb:low]
  • ceph-mon -f --cluster ceph --id N3 --setuser ceph --setgroup ceph [ms_dispatch]
I see around 270-300 KB/s written to the boot disk, mostly from ceph-mon. That is around 24 GB/day and 10 TB/year just at idle; you have to add all the additional VM/CT/OS workload on top when not idle. Any idea how to address the Ceph logging? Thank you
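The yearly figure holds up as a rough estimate; a quick sketch of the arithmetic, assuming a steady 280 KB/s (the middle of the observed range):

```shell
# Back-of-envelope: sustained write rate -> daily/yearly volume
RATE_KB=280                                               # KB/s, assumed steady
PER_DAY_GB=$(( RATE_KB * 86400 / 1024 / 1024 ))           # seconds per day -> GB/day
PER_YEAR_TB=$(( RATE_KB * 86400 * 365 / 1024 / 1024 / 1024 ))  # -> TB/year
echo "${PER_DAY_GB} GB/day, ${PER_YEAR_TB} TB/year"       # roughly 23 GB/day, 8 TB/year
```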
 
No, I didn't find a solution. A workaround for journald may help:

/etc/systemd/journald.conf:
Code:
[Journal]
Storage=volatile

This makes journald log to RAM only -> after a reboot the log is gone.
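If you try this, note that journald has to be restarted before the change takes effect; a sketch:

```shell
# Pick up the Storage=volatile change
systemctl restart systemd-journald

# Optional: shrink the old persistent journal still sitting on disk
journalctl --vacuum-size=64M
```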
 
Thanks. What I have done as well, based on the Ceph documentation, is to move the Ceph logs to syslog (which is already in RAM) by editing /etc/ceph/ceph.conf and adding:

Code:
[global]
log_to_syslog = true
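On a Quincy cluster the same switches can presumably also be flipped in the centralized config database instead of editing ceph.conf on every node; a hedged sketch (verify the option names against your version, e.g. with ceph config help log_to_syslog):

```shell
# Daemon logs -> syslog via the monitors' config store
ceph config set global log_to_syslog true

# The cluster log (the source of the pgmap spam) has separate switches
ceph config set global mon_cluster_log_to_syslog true
ceph config set global mon_cluster_log_to_file false
```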
 
