CEPH-Log DBG messages - why?

Ronny

Member
Hello,

On our new 3-node cluster with a fresh Ceph installation, we continuously get the following messages in the Ceph log on all 3 nodes.

pveversion: pve-manager/5.4-4/97a96833 (running kernel: 4.15.18-12-pve)
The cluster contains these hosts:
pve-hp-01 (7 OSDs)
pve-hp-02 (7 OSDs)
pve-hp-03 (8 OSDs)

But the message always says ...pve-hp-01... ?

Any suggestions?


Code:
2019-04-26 15:54:12.295491 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7453 : cluster [DBG] pgmap v7509: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:14.315197 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7454 : cluster [DBG] pgmap v7510: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:16.335828 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7455 : cluster [DBG] pgmap v7511: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:18.355260 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7456 : cluster [DBG] pgmap v7512: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:20.375540 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7457 : cluster [DBG] pgmap v7513: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:22.395502 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7458 : cluster [DBG] pgmap v7514: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:24.415556 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7459 : cluster [DBG] pgmap v7515: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
 

Alwin

Proxmox Staff Member
After reading a bit more: you actually need to change the cluster log file level to 'info' in the ceph.conf and restart the MONs (default: debug).

Code:
ceph daemon mon.a config show | grep mon_cluster_log_file_level
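
For completeness, a minimal sketch of the restart step mentioned above, assuming the usual systemd unit naming (ceph-mon@<mon id>) and MON ids that match the nodes' short hostnames; run it on each MON node after editing ceph.conf:

Code:
# restart the local monitor so it re-reads ceph.conf
systemctl restart ceph-mon@$(hostname -s).service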
 

rdrl

New Member
Any suggestions?
After updating to version 5.4, I also began to see similar messages in the log.
Code:
2019-04-24 18:22:50.240762 mgr.st02 client.37617344 172.31.67.202:0/4110220472 1639 : cluster [DBG] pgmap v1642: 1184 pgs: 1184 active+clean; 702GiB data, 2.06TiB used, 12.7TiB / 14.7TiB avail; 1.56KiB/s rd, 138KiB/s wr, 25op/s
2019-04-24 18:22:52.267237 mgr.st02 client.37617344 172.31.67.202:0/4110220472 1640 : cluster [DBG] pgmap v1643: 1184 pgs: 1184 active+clean; 702GiB data, 2.06TiB used, 12.7TiB / 14.7TiB avail; 2.55KiB/s rd, 183KiB/s wr, 27op/s
2019-04-24 18:22:54.288789 mgr.st02 client.37617344 172.31.67.202:0/4110220472 1641 : cluster [DBG] pgmap v1644: 1184 pgs: 1184 active+clean; 702GiB data, 2.06TiB used, 12.7TiB / 14.7TiB avail; 2.88KiB/s rd, 151KiB/s wr, 24op/s
2019-04-24 18:22:56.309446 mgr.st02 client.37617344 172.31.67.202:0/4110220472 1642 : cluster [DBG] pgmap v1645: 1184 pgs: 1184 active+clean; 702GiB data, 2.06TiB used, 12.7TiB / 14.7TiB avail; 2.88KiB/s rd, 145KiB/s wr, 23op/s
2019-04-24 18:22:58.339352 mgr.st02 client.37617344 172.31.67.202:0/4110220472 1643 : cluster [DBG] pgmap v1646: 1184 pgs: 1184 active+clean; 702GiB data, 2.06TiB used, 12.7TiB / 14.7TiB avail; 10.8KiB/s rd, 207KiB/s wr, 33op/s


They weren't there before. I checked on each node with the command

Code:
ceph daemon mon.NODE config get mon_cluster_log_file_level

The result was:

Code:
{
    "mon_cluster_log_file_level": "debug"
}

After that, I ran on each node:

Code:
ceph daemon mon.NODE config set mon_cluster_log_file_level info

Result:

Code:
{
    "mon_cluster_log_file_level": "info"
}

After that, the log was back to how it was before, without the "debug" messages ...
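
For reference, a small convenience sketch for applying the runtime change to every MON at once, assuming passwordless SSH between the nodes and MON ids that match the short hostnames (the node names below are placeholders, not from this thread). Note that 'ceph daemon ... config set' only changes the running daemon and does not survive a restart.

Code:
# runtime-only change; does not persist across MON restarts
# node names are hypothetical - adjust to your cluster
for node in st01 st02 st03; do
    ssh "$node" 'ceph daemon mon.$(hostname -s) config set mon_cluster_log_file_level info'
done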
 

Whatever

Member
Is this solution persistent across reboots? Or should I put something in ceph.conf?
Thanks
 

Alwin

Proxmox Staff Member
@Whatever
Code:
[global]
mon_cluster_log_file_level = info
or, alternatively, in the '[mon]' section.
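
To verify that the persistent setting took effect after a MON restart, something like the following should work (assuming the MON id matches the node's short hostname):

Code:
# confirm the value the running monitor picked up from ceph.conf
ceph daemon mon.$(hostname -s) config get mon_cluster_log_file_level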
 
