[SOLVED] ceph rbd error: rbd: list: (95) Operation not supported (500)

RobFantini

Hello

I added Ceph storage:
Code:
rbd: kvm-ceph
       monhost 10.11.12.3;10.11.12.5;10.11.12.8
       content images
       krbd 0
       pool kvm-ceph
       username admin

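For reference, roughly the same storage entry can be created from the CLI with pvesm; this is just a sketch of the equivalent command, not what I actually ran:
Code:
# hypothetical CLI equivalent of the storage entry above
pvesm add rbd kvm-ceph --pool kvm-ceph --monhost "10.11.12.3;10.11.12.5;10.11.12.8" --content images --krbd 0 --username admin
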
When I click on the storage or try to use it from PVE (restore a backup or move a disk), I get:
Code:
# click on storage
rbd error: rbd: list: (95) Operation not supported (500)

#move disk
create full clone of drive scsi0 (kvm-zfs:vm-106-disk-1)
TASK ERROR: storage migration failed: rbd error: rbd: list: (95) Operation not supported
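
It may also help to try the same listing directly with the rbd CLI on the node; a sketch of that check, assuming the admin user and the pool above:
Code:
# try the listing PVE performs, directly with rbd
rbd ls -p kvm-ceph --id admin --keyring /etc/pve/priv/ceph/kvm-ceph.keyring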

I tried the monhost list with and without ';' separators, per another thread.

The keyrings are OK; this shows no difference:
Code:
cmp ceph.client.admin.keyring ceph/kvm-ceph.keyring
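
The key in the file can also be compared against what the cluster itself reports; a sketch, assuming client.admin is the user in question:
Code:
# print the key the cluster has stored for client.admin, to compare with the keyring file
ceph auth get client.admin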

I'm sure I've overlooked something basic.
Any clues on how to fix this?
 
Code:
# pveversion -v
proxmox-ve: 5.0-21 (running kernel: 4.10.17-3-pve)
pve-manager: 5.0-31 (running version: 5.0-31/27769b1f)
pve-kernel-4.10.17-2-pve: 4.10.17-20
pve-kernel-4.10.17-3-pve: 4.10.17-21
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve3
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-12
qemu-server: 5.0-15
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-6
libpve-storage-perl: 5.0-14
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-9
pve-qemu-kvm: 2.9.0-5
pve-container: 2.0-15
pve-firewall: 3.0-2
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.11-pve17~bpo90
ceph: 12.2.0-pve1

I removed the PVE Ceph storage, as other parts of PVE were becoming unstable. For instance, clicking Node > Summary had a long delay before showing any info.
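
That only removed the storage definition, not the pool or its data; a sketch of the command, assuming the storage name above:
Code:
# remove only the PVE storage definition; the Ceph pool itself is untouched
pvesm remove kvm-ceph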
 
ceph.conf :
Code:
[global]
         auth client required = none
         auth cluster required = none
         auth service required = none
         cluster network = 10.11.12.0/24
         fsid = 220b9a53-4556-48e3-a73c-28deff665e45
         keyring = /etc/pve/priv/$cluster.$name.keyring
         mon allow pool delete = true
         osd journal size = 5120
         osd pool default min size = 2
         osd pool default size = 3
         public network = 10.11.12.0/24

[osd]
         keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.sys5]
         host = sys5
         mon addr = 10.11.12.5:6789

[mon.sys3]
         host = sys3
         mon addr = 10.11.12.3:6789

[mon.sys8]
         host = sys8
         mon addr = 10.11.12.8:6789

ceph -s
Code:
  cluster:
    id:     220b9a53-4556-48e3-a73c-28deff665e45
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum sys3,sys5,sys8
    mgr: sys3(active), standbys: sys8, sys5
    osd: 24 osds: 24 up, 24 in
  data:
    pools:   2 pools, 1024 pgs
    objects: 0 objects, 0 bytes
    usage:   25575 MB used, 10703 GB / 10728 GB avail
    pgs:     1024 active+clean

Maybe the issue has to do with disabling cephx. I'll recheck that setup.
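
A quick way to recheck the cephx state is to look at the auth settings in the cluster config; a sketch, assuming the PVE-managed config at /etc/pve/ceph.conf:
Code:
# show the cephx-related settings currently in the config
grep 'auth' /etc/pve/ceph.conf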
 
The problem was caused by:

1- having cephx disabled (I did that during Ceph initialization), and
2- at the same time having storage keys at /etc/pve/priv/ceph/.

Removing the storage keys at /etc/pve/priv/ceph/ fixed the issue.
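
For anyone hitting the same combination, the fix amounts to something like this; a sketch, assuming the storage is named kvm-ceph (moving instead of deleting keeps a backup):
Code:
# move the storage keyring out of /etc/pve/priv/ceph/ instead of deleting it outright
mkdir -p /root/ceph-keyring-backup
mv /etc/pve/priv/ceph/kvm-ceph.keyring /root/ceph-keyring-backup/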
 
I have the same problem, but I don't have storage keys at /etc/pve/priv/ceph/. I don't have a folder at /etc/pve/priv/ceph/ at all, only /etc/pve/priv/lock. I didn't disable cephx. If I disable it, will that fix the problem?
 
/etc/pve/priv/ceph/
You need to copy the keyring file of the Ceph user you use to connect to the Ceph cluster to that location (named <storage>.keyring), so PVE knows it has to use cephx and which key to use.
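
Something along these lines; a sketch, assuming the storage is called kvm-ceph and you connect as client.admin:
Code:
# copy the admin keyring to the name PVE expects for this storage (<storage>.keyring)
mkdir -p /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/kvm-ceph.keyring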
 
So I had disabled cephx and then enabled it again, but still got the error (maybe pvestatd should check whether cephx has been enabled again).

My solution for the "pvestatd: rados_connect failed - Operation not supported" error was therefore the following:

Code:
# move the keyring files back from the old/ backup directory into /etc/pve/priv/ceph/
cd /etc/pve/priv/ceph/old/
mv * ../

Now my Proxmox GUI doesn't display error marks on the Ceph RBD anymore! :D
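
If the GUI had still shown the old error after that, restarting pvestatd so it reconnects would have been my next step (an assumption on my part, I did not need it):
Code:
# restart the PVE status daemon so it reconnects to the cluster
systemctl restart pvestatd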
 
I just ran into this error again when creating a new RBD pool.

I feel like Proxmox and Ceph with cephx disabled are not playing well together. I hope more attention is devoted to making sure all components work well with cephx disabled.
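
For reference, creating an RBD pool on Luminous amounts to something like this; a sketch with a placeholder name and PG count:
Code:
# create a pool and initialize it for RBD use (name and PG count are placeholders)
ceph osd pool create kvm-ceph2 128
rbd pool init kvm-ceph2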
 
