about ceph config error

haiwan

Well-Known Member
Apr 23, 2019
Code:
2019-04-28 02:00:05.835603 mgr.pve140 client.24111 192.168.100.140:0/3689608854 71703 : cluster [DBG] pgmap v71690: 256 pgs: 26 active+undersized+remapped, 230 undersized+peered; 0B data, 5.04GiB used, 27.3TiB / 27.3TiB avail
2019-04-28 02:00:07.855779 mgr.pve140 client.24111 192.168.100.140:0/3689608854 71704 : cluster [DBG] pgmap v71691: 256 pgs: 26 active+undersized+remapped, 230 undersized+peered; 0B data, 5.04GiB used, 27.3TiB / 27.3TiB avail
2019-04-28 02:00:09.875527 mgr.pve140 client.24111 192.168.100.140:0/3689608854 71705 : cluster [DBG] pgmap v71692: 256 pgs: 26 active+undersized+remapped, 230 undersized+peered; 0B data, 5.04GiB used, 27.3TiB / 27.3TiB avail
2019-04-28 02:00:11.895626 mgr.pve140 client.24111 192.168.100.140:0/3689608854 71706 : cluster [DBG] pgmap v71693: 256 pgs: 26 active+undersized+remapped, 230 undersized+peered; 0B data, 5.04GiB used, 27.3TiB / 27.3TiB avail

We created a new pool, and afterwards this error started appearing.
Please point me in the right direction, thanks.

Setup: 4 servers, each with 5 × 6 TB SAS disks.
 
Attachments

  • QQ截图20190428020143.jpg (28.5 KB)
  • QQ截图20190428020209.jpg (38.1 KB)
Did the remapping finish? Can you post a 'ceph osd tree' output?

EDIT: I found out why this message is shown.

This is expected behaviour, see the issue:
https://tracker.ceph.com/issues/37886

If you don't want this information, then you need to change the cluster log file level to 'info' in the ceph.conf and restart the MONs (default: 'debug').

Code:
ceph daemon mon.a config show | grep mon_cluster_log_file_level
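
As a sketch, the ceph.conf change could look like the fragment below; the [mon] section placement is an assumption, and the setting name is the one shown in the command above:

Code:
[mon]
# Lower the cluster log file verbosity from the default 'debug' to 'info'
mon_cluster_log_file_level = info

After editing, restart the MONs for the change to take effect.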
 
Code:
root@pve140:~# ceph daemon mon.a config show | grep mon_cluster_log_file_level
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
root@pve140:~#
 
Code:
root@pve140:~# ceph osd tree
ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       27.28943 root default
-3       27.28943     host pve140
 0   hdd  5.45789         osd.0       up  1.00000 1.00000
 1   hdd  5.45789         osd.1       up  1.00000 1.00000
 2   hdd  5.45789         osd.2       up  1.00000 1.00000
 3   hdd  5.45789         osd.3       up  1.00000 1.00000
 4   hdd  5.45789         osd.4       up  1.00000 1.00000
root@pve140:~#
 
Did you replace mon.a with your hostname? Is a MON/MGR running on this node, and does it have the Ceph config, so that you can view the Ceph stats in the PVE GUI?
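
The admin_socket error above usually means the daemon name in the command does not match a socket on this node. A small sketch of how to check, assuming the MON id equals the short hostname (e.g. pve140) and the default socket path /var/run/ceph/:

```shell
# List the admin sockets actually present on this node (assumed default path)
ls /var/run/ceph/ 2>/dev/null

# Derive the MON id from the short hostname (assumption: MON id == hostname)
mon_id="mon.$(hostname -s)"
sock="/var/run/ceph/ceph-${mon_id}.asok"

# Only query the daemon if its socket exists, to avoid the ENOENT error
if [ -S "$sock" ]; then
    ceph daemon "$mon_id" config show | grep mon_cluster_log_file_level
else
    echo "no admin socket at $sock - check 'ls /var/run/ceph/' for the real MON id"
fi
```

If the id differs from the hostname, use whatever appears in the ceph-mon.*.asok filename instead.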
 
