[CEPH] auth: unable to find a keyring on /etc/pve/priv/ceph.client.admin.keyring: (13) Permission denied

Hi @Max Carrara,

Code:
pve-manager/8.2.4/faa83925c9641325 (running kernel: 6.8.8-2-pve)

Code:
ceph version 18.2.2 (e9fe820e7fffd1b7cde143a9f77653b73fcec748) reef (stable)

Code:
ls -alhu /etc/pve/ceph
total 512
drwxr-xr-x 2 root www-data  0 Jun 19 15:26 .
drwxr-xr-x 2 root www-data  0 Jan  1  1970 ..
-rw-r----- 1 root www-data 63 Jun 19 15:26 ceph.client.crash.keyring

Code:
ls -alhu /var/lib/ceph/crash
total 20K
drwxr-xr-x  5 ceph ceph 4.0K Sep  4 22:02 .
drwxr-x--- 14 ceph ceph 4.0K Sep  3 09:11 ..
drwx------  2 ceph ceph 4.0K Aug  7 19:07 2024-08-07T09:07:53.831553Z_2f7b806e-6d40-413c-8fc0-d89e71206551
drwx------  2 ceph ceph 4.0K Aug  7 19:08 2024-08-07T09:08:01.169014Z_17b51a51-5d78-4c3e-b983-97ab75ec0ac6
drwxr-xr-x  5 ceph ceph 4.0K Jun 19 15:26 posted

Code:
ls -alhu /var/lib/ceph/crash/posted
total 20K
drwxr-xr-x 5 ceph ceph 4.0K Sep  5 08:40 .
drwxr-xr-x 5 ceph ceph 4.0K Sep  4 22:02 ..
drwx------ 2 ceph ceph 4.0K Jul  5 09:08 2024-07-04T23:08:02.887774Z_ace1ad4e-2c81-4b87-81af-ea6302457785
drwx------ 2 ceph ceph 4.0K Aug  7 19:08 2024-08-07T09:08:00.936326Z_60a41562-026e-4b50-8c7f-b597e66f38a7
drwx------ 2 ceph ceph 4.0K Aug  7 19:08 2024-08-07T09:08:02.272533Z_ff19c064-30cf-4994-b32f-c73faea8ccbe

Code:
stat /etc/pve/priv/ceph.client.admin.keyring
  File: /etc/pve/priv/ceph.client.admin.keyring
  Size: 151             Blocks: 1          IO Block: 4096   regular file
Device: 0,47    Inode: 32          Links: 1
Access: (0600/-rw-------)  Uid: (    0/    root)   Gid: (   33/www-data)
Access: 2024-06-19 15:26:56.000000000 +1000
Modify: 2024-06-19 15:26:56.000000000 +1000
Change: 2024-06-19 15:26:56.000000000 +1000
 Birth: -
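If it helps, the 0600/root mode in the stat output above is already enough to explain errno 13 for any non-root reader. A minimal sketch with a throwaway file (assuming the failing process, e.g. the ceph-crash daemon, runs as the unprivileged ceph user rather than root):

```shell
# Hedged sketch: the stat output above shows mode 0600 with owner root,
# so any non-root process gets EACCES (13) when it tries to open() the
# file. Reproduce with a temp file instead of touching the real keyring:
f=$(mktemp)
chmod 0600 "$f"
stat -c 'mode=%a' "$f"   # prints: mode=600 (only the owner may read)
# From any *other* unprivileged account, reading such a file fails:
#   su -s /bin/sh ceph -c "cat $f"   ->  Permission denied
rm -f "$f"
```

The real keyring under /etc/pve/priv is left untouched here; the su line is only indicative.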

Thanks!
@Max Carrara I've just set up a single node with the latest available ISO, updated everything to the latest state (pve-manager/8.3.4/65224a0f9cd294a3 (running kernel: 6.8.12-8-pve)), and ran into the same issue with Ceph Reef. The outputs of all requested commands are the same, with the exception of ls -alhu /etc/pve/ceph, which is empty in my case. Any ideas on how this happened?
 

Hello! That's quite odd. Are you certain there's nothing in there and that you have the permissions to access that directory?

Does the node have a Ceph Monitor set up?
 
I am certain there was no file and that I had access rights to the folder (I browsed it as the root user).

The node had a monitor set up, though I'm unsure whether it was fully/properly created to begin with. As this was a single node, the Ceph setup "failed", in a sense, due to a lack of available disks: initially I tried to set it up using only two dedicated disks (as a test). Even when the config was set to an OSD count of 2 as the default, I was never able to write to the resulting Ceph storage. I thought that was strange, as it didn't report any errors with the cluster, just a warning about too many PGs, but my time was fairly limited, so I couldn't investigate further.
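For the "writes hang but no error" symptom, a few read-only ceph CLI commands (standard commands; nothing specific to this setup is assumed) can confirm the usual single-node suspect: the stock CRUSH rule replicates across hosts, so a pool with size > 1 on one host may never get its PGs active.

```shell
# Hedged diagnostic sketch; all commands are read-only.
# Guard so the sketch degrades gracefully where the ceph CLI is absent:
if ! command -v ceph >/dev/null 2>&1; then
    echo "ceph CLI not available on this machine"
    exit 0
fi
ceph -s                      # look for "undersized"/"inactive" PGs
ceph osd pool ls detail      # per-pool size and min_size
ceph osd crush rule dump     # failure domain ("type host" vs "type osd")
```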

What I can tell you, though, is that I was unable to set up a metadata server: it always timed out on creation of the FS. As I was unable to write to the RBD, I suspect creating the MDS hit the same issue and simply got stuck.

For now I've torn the Ceph setup down and will wait until our hosting provider adds the requested third disk, so unfortunately I can't provide any further information.
 