Proxmox with Ceph - Problems showing the OSDs in the GUI

fxandrei

So I just made a new cluster with 6.2.
I installed Ceph and configured everything.
Everything went smooth as butter :) .
But I'm seeing a problem when I click Ceph -> OSD on any node: the OSDs don't show up immediately every time.
A lot of times I click there and it's empty. I click reload and still nothing. I usually have to wait quite a few seconds, and maybe click on something else and then come back.

I have tried executing ceph osd tree in the console and it works every time.
I'm not sure what is happening or where to check.
I looked at some logs and did not find any problems.

Has anyone else encountered this?
 
Can all nodes in the cluster reach the MONs? Basically, every node that needs access to Ceph must be able to communicate over Ceph's public_network (see ceph.conf).
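For example, you could check from each node with something along these lines (just a sketch - the address here is only an example, substitute the MON addresses from mon_host in your ceph.conf; 6789 and 3300 are the default MON ports for msgr1/msgr2):

Code:
# ping one of the monitors from every node in the cluster
ping -c 2 10.5.5.221
# verify the MON ports actually accept connections
nc -zv 10.5.5.221 6789
nc -zv 10.5.5.221 3300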
 
Well, yes. If I execute ceph osd tree from the shell I get the output right away.
Yes, but from all nodes in the cluster? The request from the GUI is directed to the node whose Ceph -> OSD tab you are viewing, so that node has to fetch the data itself.
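You can also reproduce roughly what the GUI does with pvesh; if I recall correctly the OSD tab queries the node's /ceph/osd API path (the node name here is just an example):

Code:
# ask a specific node for its view of the OSD tree via the PVE API
pvesh get /nodes/pve-02/ceph/osd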
 
Yes, they are all in the cluster. It's really strange. If there were somehow a problem contacting the nodes, where should I look? In what log?
 
The syslog is always a good first stop, and the Ceph logs as well.
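For instance, something like this while pressing reload in the GUI (a sketch, assuming a default PVE/Ceph setup):

Code:
# watch the Proxmox API daemons for errors while reloading the OSD tab
journalctl -f -u pveproxy -u pvedaemon
# and follow the cluster-wide Ceph log (on a MON node)
tail -f /var/log/ceph/ceph.log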
 
I'm having the same problem - most of the time the OSDs show up on the first node, but not on the second and third. Sometimes they do show up if I'm switching between hosts. I did:

  • run "ceph osd tree" on every node -> works
  • restarted all Ceph MONs one after another
  • checked journalctl after pressing "reload" in the web GUI -> can't see any error
  • checked /etc/hosts - lookup works
  • ceph osd crush ls pve-01 (also pve-02 and pve-03) works
Code:
[global]
     auth_client_required = cephx
     auth_cluster_required = cephx
     auth_service_required = cephx
     cluster_network = 10.6.6.221/24
     fsid = aeb02f58-8b3d-4b5d-8cf2-a16af4959bbf
     mon_allow_pool_delete = true
     mon_host = 10.5.5.221 10.5.5.222 10.5.5.223
     osd_pool_default_min_size = 2
     osd_pool_default_size = 3
     public_network = 10.5.5.221/24

[client]
     keyring = /etc/pve/priv/$cluster.$name.keyring

[mon.pve-01]
     public_addr = 10.5.5.221

[mon.pve-02]
     public_addr = 10.5.5.222

[mon.pve-03]
     public_addr = 10.5.5.223
 
Did you solve this?

I never had this, since I always check the following:
  • make sure the disks are wiped and initialized with GPT BEFORE creating them as OSDs (a sketch of this below)
    • even though they show GPT "No" after OSD creation, because Ceph uses LVM
  • have correct network time (check chrony! - see the example after this list)
  • most of the time, when I had GUI problems regarding Ceph, it was a network problem (time or connectivity)
  • sometimes you can't even install Ceph when the clocks are too far out of sync
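For the wiping part, a sketch of what I mean (assuming /dev/sdX is the disk you want to reuse - this destroys all data on it):

Code:
# remove old Ceph/LVM metadata and partition tables from the disk
ceph-volume lvm zap --destroy /dev/sdX
# or, by hand: clear signatures and write a fresh, empty GPT
wipefs -a /dev/sdX
sgdisk -o /dev/sdX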
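And for the time check, assuming chrony is in use, a quick look on every node:

Code:
# how far off is this node's clock?
chronyc tracking
# are the time sources reachable and one of them selected?
chronyc sources -v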
 
