pve 3.4 could not open rbd... operation not supported

We have a real cluster (Proxmox 3.4, the supported version with kernel 3.10) with Ubuntu 14.04 / Ceph Giant where this setup works.

The test "cluster" is a single Proxmox 3.4 (kernel 3.10) node connected to a 3-node Ceph cluster; all are VMware Fusion virtual machines.
Guests on it run fine with local storage, but don't run on Ceph storage.
The Proxmox node is using the Proxmox no-subscription repository for its Ceph client (ceph 0.80.0).
The test Ceph cluster is running Ubuntu 14.04 LTS and Ceph Giant (0.87.2).

The test Proxmox node can see the Ceph cluster and its free space, can create a new disk image on it, even clone a disk image onto it,
but it doesn't seem to be able to attach it to KVM: "Could not open 'rbd:rbd/vm-101-disk-1:mon_host=192.168.113.41': Operation not supported"

I copied the cephx key from the mon host (/etc/ceph/ceph.client.admin.keyring) to /etc/pve/priv/ceph/rbd.keyring and tried both the web GUI and pvesh
to add the Ceph storage. In both cases everything seemed fine as above, but the same error kept KVM from starting; running the VM from local storage works fine.
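For reference, roughly what I did (a sketch from memory; the pvesh parameter names mirror the storage.cfg keys shown further down):

Code:
# copy the admin key from a mon host; the keyring file name must match the storage id ("rbd")
scp root@192.168.113.41:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/rbd.keyring

# add the storage via pvesh (same fields as the web GUI dialog)
pvesh create /storage -storage rbd -type rbd -monhost '192.168.113.41,192.168.113.42,192.168.113.43' -pool rbd -content images -username admin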

Here's the full error:

Use of uninitialized value in string ne at /usr/share/perl5/PVE/QemuServer.pm line 2907.
Use of uninitialized value in string eq at /usr/share/perl5/PVE/QemuServer.pm line 2909.
kvm: -drive file=rbd:rbd/vm-101-disk-1:mon_host=192.168.113.41,192.168.113.42,192.168.113.43:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/rbd.keyring,if=none,id=drive-scsi1,aio=native,cache=none,detect-zeroes=on: could not open disk image rbd:rbd/vm-101-disk-1:mon_host=192.168.113.41: Could not open 'rbd:rbd/vm-101-disk-1:mon_host=192.168.113.41': Operation not supported
TASK ERROR: start failed: command '/usr/bin/kvm -id 101 -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/101.vnc,x509,password -pidfile /var/run/qemu-server/101.pid -daemonize -smbios 'type=1,uuid=b2d545a6-3459-424b-8d93-7e5c722121d1' -name cephclone -smp '1,sockets=1,cores=1,maxcpus=1' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000' -vga cirrus -cpu kvm64,+lahf_lm,+x2apic,+sep -m 128 -k en-us -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:11dc9937673f' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=rbd:rbd/vm-101-disk-1:mon_host=192.168.113.41,192.168.113.42,192.168.113.43:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/rbd.keyring,if=none,id=drive-scsi1,aio=native,cache=none,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1' -drive 'file=rbd:rbd/vm-101-disk-2:mon_host=192.168.113.41,192.168.113.42,192.168.113.43:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/rbd.keyring,if=none,id=drive-scsi0,aio=native,cache=none,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=101'' failed: exit code 1Could not open 'rbd:rbd/vm-101-disk-1:mon_host=192.168.113.41': Operation not supported
 

Hi,
sounds like something is wrong with the permissions?!

Can you post the output of the following commands?
Code:
rados -p rbd ls

grep -A 5 rbd /etc/pve/storage.cfg

cat /etc/pve/priv/ceph/*.keyring
Udo
 
root@proxbox:~# rados -p rbd ls
2015-07-30 12:32:43.067025 7f298e15e760 -1 did not load config file, using default settings.
no monitors specified to connect to.
couldn't connect to cluster! error -2

root@proxbox:~# grep -A 5 rbd /etc/pve/storage.cfg
rbd: rbd
monhost 192.168.113.41,192.168.113.42,192.168.113.43
pool rbd
content images
username admin

dir: local
path /var/lib/vz

root@proxbox:~# cat /etc/pve/priv/ceph/*.keyring
[client.admin]
key = AQDvi7lVQOdgIhAAj49y5SfmWeo8Pbh+VKFZdA==
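Side note: the rados failure above is expected, I believe: the PVE node has no /etc/ceph/ceph.conf, so rados has no monitor list to use. Pointing it at the cluster explicitly should work (a sketch, using the standard Ceph CLI options):

Code:
rados -m 192.168.113.41 --id admin --keyring /etc/pve/priv/ceph/rbd.keyring -p rbd ls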
 
Hi,
the monhosts are separated with spaces in my config (like "monhost 192.168.113.41 192.168.113.42 192.168.113.43"), but I guess that's not the issue.

Is the mon running on 192.168.113.41? Is the MTU right on both sides? Does the auth match? Is the content visible on the mon node?
Commands on the mon host:
Code:
netstat -an | grep 6789 | grep -i listen

ip addr | grep -A 2 -B 2 192.168.113.41

ceph auth list | grep -A 6 client.admin

rados -p rbd ls

Is your Ceph public_network 192.168.113.0/24? And is your PVE host in the same network?
On your PVE node:
Code:
ip addr | grep -A 2 -B 2 192.168.113
Udo
 
MTU is 1500 on both, and yes, same network; it's an in-host VMware network on my laptop.
But I tried changing the monhost line to use spaces like yours, and it worked :D

The cluster at work has ';' between the monhosts, so I guess ';' and space both work, but not ','.
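I guess that's because QEMU itself uses ',' as the option separator on its command line, so the comma-separated monitor list was getting cut off after the first address (you can see the truncated 'mon_host=192.168.113.41' in the error above). For the record, the working storage.cfg entry is the same as before, just with spaces:

Code:
rbd: rbd
	monhost 192.168.113.41 192.168.113.42 192.168.113.43
	pool rbd
	content images
	username admin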

root@c1:~# netstat -an | grep 6789 | grep -i listen
tcp 0 0 192.168.113.41:6789 0.0.0.0:* LISTEN
root@c1:~# ip addr | grep -A 2 -B 2 192.168.113.41
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:50:56:25:08:e4 brd ff:ff:ff:ff:ff:ff
inet 192.168.113.41/24 brd 192.168.113.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe25:8e4/64 scope link
root@c1:~# ceph auth list | grep -A 6 client.admin
installed auth entries:

client.admin
key: AQDvi7lVQOdgIhAAj49y5SfmWeo8Pbh+VKFZdA==
caps: [mds] allow
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQDwi7lVKDuKEhAASvboiIwNvTrmUq8nziTKsg==

root@c1:~# rados -p rbd ls
rbd_data.11e62ae8944a.0000000000000360
rbd_data.11e62ae8944a.0000000000000045
rbd_data.11e62ae8944a.00000000000000cc
...
rbd_id.vm-101-disk-1
foo.rbd
rbd_id.vm-101-disk-2
 
