Ignoring custom ceph config for storage

Timothy1056
New Member · Aug 18, 2022
Good day. I was wondering how to get rid of this error.

Code:
Jun 28 14:37:01 pve13 pvestatd[1495]: ignoring custom ceph config for storage 'CephData', 'monhost' is not set (assuming pveceph managed cluster)!
Jun 28 14:37:12 pve13 pvestatd[1495]: ignoring custom ceph config for storage 'CephData', 'monhost' is not set (assuming pveceph managed cluster)!
Jun 28 14:37:21 pve13 pvestatd[1495]: ignoring custom ceph config for storage 'CephData', 'monhost' is not set (assuming pveceph managed cluster)!

The same message repeats roughly every ten seconds.


Currently everything is operating as normal.
 
Please post your ceph.conf (cat /etc/pve/ceph.conf) and also the output of ceph health.
 
Output of cat /etc/pve/ceph.conf:
Code:
cat /etc/pve/ceph.conf
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster_network = 192.168.50.11/24
         fsid = 7e77370a-73da-4441-a059-8337ba12eff3
         mon_allow_pool_delete = true
         mon_host = 192.168.50.29 192.168.50.19 192.168.50.8 192.168.50.23
         ms_bind_ipv4 = true
         ms_bind_ipv6 = false
         osd_pool_default_min_size = 2
         osd_pool_default_size = 3
         public_network = 192.168.50.11/24

[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
         keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mon.pve13]
         public_addr = 192.168.50.23

[mon.pve18]
         public_addr = 192.168.50.29

[mon.pve19]
         public_addr = 192.168.50.19

[mon.pve8]
         public_addr = 192.168.50.8

Ceph health output. I am aware of one of the MONs being down.

Code:
ceph health
HEALTH_WARN 1/4 mons down, quorum pve18,pve8,pve13; all OSDs are running quincy or later but require_osd_release < quincy
 
Does that config look the same on all of your nodes?
 
Sorry for such a delayed response.

I have confirmed that the config file /etc/pve/ceph.conf is the same on all nodes (see attached Screenshot from 2024-02-02 14-24-07.png).
 
First of all, note that you shouldn't deploy 4 MONs. Use either 3, 5, or at most 7. In your setup you will easily get by with 3; larger setups with several hundred OSDs only need 5, and 7 only in very special cases.
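If you want to drop the extra monitor, a minimal sketch (run on the node that hosts that monitor, or use the GUI under the node's Ceph -> Monitor panel; pve19 is just an example taken from your health output, and the remaining three MONs must keep quorum):

Code:
# remove the monitor with the ID pve19 from the cluster
pveceph mon destroy pve19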

Do you only have the error message on pve13? What does your /etc/pve/storage.cfg look like? Do you have anything left under /etc/ceph?
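For reference, a pveceph-managed RBD entry in /etc/pve/storage.cfg has no monhost line, while an entry for an external cluster does. This is only a hedged sketch with placeholder names and addresses:

Code:
rbd: CephData
        content images,rootdir
        krbd 0
        pool CephData

rbd: ExternalCeph
        content images
        pool rbd
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        username admin

Only the second form makes PVE honour a custom ceph config for that storage; without monhost it assumes the local pveceph cluster, which is exactly what the log message says.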
 
Hi @sb-jw, I have reduced the monitors to 3.

Please find attached storage.cfg

In the /etc/ceph directory there are two files, named ceph.conf and rbdmap.
 

Attachments

  • storage.txt (2.1 KB)
  • ceph.txt (839 bytes)
  • rbdmap.txt (108 bytes)
Do you only have the error message on pve13?
In addition to the above question: does the rbdmap file exist on all servers? You don't actually need it and can delete it; that might solve the error.
 
This error appears on all the servers and the rbdmap file is located on every server as well.

I have noticed that the ceph.conf file is missing on PVE7, PVE10 and PVE11
 
I have noticed that the ceph.conf file is missing on PVE7, PVE10 and PVE11
The one under /etc/ceph/ceph.conf (which is actually just a symlink) or the one under /etc/pve/ceph.conf? If the latter, you should urgently check the nodes in question, because that folder is replicated to all nodes and should not differ between them.
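A minimal way to check this on the affected nodes (assuming a standard PVE install) would be:

Code:
# /etc/ceph/ceph.conf is normally just a symlink into pmxcfs
ls -l /etc/ceph/ceph.conf

# verify the node is in quorum and the cluster filesystem (/etc/pve) is healthy
pvecm status
systemctl status pve-cluster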
 
Code:
ceph health
HEALTH_WARN 1/4 mons down, quorum pve18,pve8,pve13; all OSDs are running quincy or later but require_osd_release < quincy
Did you fix this? If the required OSD release falls 2 or more releases behind the version you are running, you can run into quite severe issues. Here is the wiki on this topic.

Also, before you do this, make sure you have a working backup (just in case).
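A minimal check-and-set sketch (assuming all daemons really are on Quincy and a backup exists):

Code:
# show the currently required OSD release
ceph osd dump | grep require_osd_release

# confirm every daemon actually runs Quincy before raising the flag
ceph versions

# raise the required release to quincy
ceph osd require-osd-release quincy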
 
Hello

The issue is not that the OSDs are not upgraded; the issue is that your system is still set to allow an older OSD release than the one you are running. Ceph only guarantees compatibility for releases up to 2 versions older than the one in use. A bigger gap to the required release will most likely corrupt your OSDs and force you to restore from backup.
 
When running
Code:
ceph mon dump | grep min_mon_release
I get the following output
Code:
min_mon_release 17 (quincy)

Based on the documentation you sent me in your previous message, the setting to only allow Quincy is set, and the error is still present.
 

Attachments

  • Screenshot from 2024-02-05 14-38-48.png (574 KB)
  • Screenshot from 2024-02-05 14-39-09.png (64.8 KB)
Hey

Did you add the CEPH storage as a local or as a remote storage?

As far as I can currently tell, the error you are receiving should only be possible if you add an external ceph cluster.
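If the pool actually lives on the local pveceph cluster, the warning usually points at a leftover per-storage config file. A hedged check, assuming PVE's usual path for custom storage configs (adjust the file name to your storage ID):

Code:
# as far as I can tell, a custom config is only looked up here; if it exists
# but the storage has no 'monhost', pvestatd logs the warning from the first post
ls -l /etc/pve/priv/ceph/CephData.conf

For a pveceph-managed pool that file is not needed and can be removed; for an external cluster, add monhost to the storage definition instead.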
 
