CEPH-Log DBG messages - why?

Ronny

Hello,

on our new 3-node cluster with a fresh Ceph installation, we continuously get these messages on all 3 nodes in the Ceph log.

pveversion: pve-manager/5.4-4/97a96833 (running kernel: 4.15.18-12-pve)
the cluster contains these hosts:
pve-hp-01 (7 OSDs)
pve-hp-02 (7 OSDs)
pve-hp-03 (8 OSDs)

but the messages always say ...pve-hp-01... ?

any suggestions?


2019-04-26 15:54:12.295491 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7453 : cluster [DBG] pgmap v7509: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:14.315197 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7454 : cluster [DBG] pgmap v7510: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:16.335828 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7455 : cluster [DBG] pgmap v7511: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:18.355260 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7456 : cluster [DBG] pgmap v7512: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:20.375540 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7457 : cluster [DBG] pgmap v7513: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:22.395502 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7458 : cluster [DBG] pgmap v7514: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:24.415556 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7459 : cluster [DBG] pgmap v7515: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
 
After reading a bit more, it turns out that you need to change the cluster log file level to 'info' in ceph.conf and restart the MONs (default: debug).

Code:
ceph daemon mon.a config show | grep mon_cluster_log_file_level
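For reference, a minimal sketch of that check against this cluster's monitors; on Proxmox VE the mon id usually matches the host name, so the ids below are an assumption taken from the node list above, and each command is run locally on the node hosting that monitor.

Code:
# check the current level (assumed mon id = host name)
ceph daemon mon.pve-hp-01 config show | grep mon_cluster_log_file_level

# optionally switch the running monitor to 'info' at runtime (not persistent), without a restart
ceph daemon mon.pve-hp-01 config set mon_cluster_log_file_level info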
 
any suggestions?

After updating to version 5.4, I also began to see similar messages in the log.
2019-04-24 18:22:50.240762 mgr.st02 client.37617344 172.31.67.202:0/4110220472 1639 : cluster [DBG] pgmap v1642: 1184 pgs: 1184 active+clean; 702GiB data, 2.06TiB used, 12.7TiB / 14.7TiB avail; 1.56KiB/s rd, 138KiB/s wr, 25op/s
2019-04-24 18:22:52.267237 mgr.st02 client.37617344 172.31.67.202:0/4110220472 1640 : cluster [DBG] pgmap v1643: 1184 pgs: 1184 active+clean; 702GiB data, 2.06TiB used, 12.7TiB / 14.7TiB avail; 2.55KiB/s rd, 183KiB/s wr, 27op/s
2019-04-24 18:22:54.288789 mgr.st02 client.37617344 172.31.67.202:0/4110220472 1641 : cluster [DBG] pgmap v1644: 1184 pgs: 1184 active+clean; 702GiB data, 2.06TiB used, 12.7TiB / 14.7TiB avail; 2.88KiB/s rd, 151KiB/s wr, 24op/s
2019-04-24 18:22:56.309446 mgr.st02 client.37617344 172.31.67.202:0/4110220472 1642 : cluster [DBG] pgmap v1645: 1184 pgs: 1184 active+clean; 702GiB data, 2.06TiB used, 12.7TiB / 14.7TiB avail; 2.88KiB/s rd, 145KiB/s wr, 23op/s
2019-04-24 18:22:58.339352 mgr.st02 client.37617344 172.31.67.202:0/4110220472 1643 : cluster [DBG] pgmap v1646: 1184 pgs: 1184 active+clean; 702GiB data, 2.06TiB used, 12.7TiB / 14.7TiB avail; 10.8KiB/s rd, 207KiB/s wr, 33op/s


They weren't there before. I checked on each node with the command

ceph daemon mon.NODE config get mon_cluster_log_file_level

the result was
{
"mon_cluster_log_file_level": "debug"
}

After that, I ran the following command on each node

ceph daemon mon.NODE config set mon_cluster_log_file_level info

result
{
"mon_cluster_log_file_level": "info"
}

After that, the log looked as it did before, without the "debug" messages ...
 
Is this solution persistent across reboots? Or should I put something in ceph.conf?
Thanks
 
@Whatever
Code:
[global]
mon_cluster_log_file_level = info
or alternatively in the '[mon]' section.
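As a usage note, the monitors only pick up the new ceph.conf value after a restart; a minimal sketch, where the ceph-mon@<hostname> systemd unit name and the example node name are assumptions about the usual Proxmox VE layout:

Code:
# restart the monitor on each node, then verify the effective value
systemctl restart ceph-mon@pve-hp-01
ceph daemon mon.pve-hp-01 config get mon_cluster_log_file_level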
 
I noticed this as well on a new cluster (PVE 6.3-6), but there are two ceph.conf files:
/etc/ceph/ceph.conf and /etc/pve/ceph.conf

/etc/pve/ceph.conf already had "mon_cluster_log_file_level = info", but /etc/ceph/ceph.conf did not. After I added it to the [global] section there and restarted the monitor, the log output immediately quieted down.

Why are there two ceph.conf files? Why don't they have the same contents?
Thanks,

Triston
 
In the clusters I've installed, and AFAIK by current intent, /etc/ceph/ceph.conf is a symbolic link to /etc/pve/ceph.conf, which makes the effective contents the same. It also results in the file being identical across all nodes.

I won't speculate as to how or why your installation did not create it as a symbolic link.
 
Hey Rokaken,

Is it advisable to symlink them, then? Which one should be the link and which the original file? I'm a little nervous about permissions and about how Proxmox may overwrite these files in the future.

Thanks for any advice,

Triston
 
/etc/pve/ceph.conf is the data file and /etc/ceph/ceph.conf is linked to it:

Code:
# ls -lh /etc/pve/ceph.conf
-rw-r----- 1 root www-data 1.2K Jul  6  2020 /etc/pve/ceph.conf

# ls -lh /etc/ceph/ceph.conf
lrwxrwxrwx 1 root root 18 Dec  5  2019 /etc/ceph/ceph.conf -> /etc/pve/ceph.conf
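If a node ends up with a standalone /etc/ceph/ceph.conf instead of the link, a minimal sketch of restoring that layout (assuming /etc/pve/ceph.conf already contains the settings you want; the backup file name is just an example):

Code:
# keep the old standalone copy, then point /etc/ceph/ceph.conf at the pmxcfs-managed file
mv /etc/ceph/ceph.conf /etc/ceph/ceph.conf.bak
ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf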
 
A question: is this still the right way to do it in Ceph v17? The reason I ask is that I tried adding mon_cluster_log_file_level = info to ceph.conf under [global] and rebooted, but I still get all the pgmap spam in the log.
 
Hello everyone,
on my 3 clusters with Ceph 17.2.5 it is the same. I have tried everything I could find and nothing works; I keep getting DBG messages about pgmap.
Any suggestions?
Thank you.
 
