command 'delete-drive-snapshot' failed - this feature or command is not currently supported

Yosu

On pve-manager/4.3-10/7230e60f, I cannot delete a snapshot from a VM. It shows this error:
VM 100 qmp command 'delete-drive-snapshot' failed - this feature or command is not currently supported

Later on, I cannot take new snapshots either. It shows this error:
VM 100 qmp command 'savevm-start' failed - failed to open '/dev/rbd/mypool/vm-100-state-test2'

Deleting snapshots on CTs works fine.
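For reference, the snapshot PVE takes here ends up as a plain RBD snapshot, so it can be inspected and removed from the Ceph side as a workaround (a sketch: the pool mypool and the snapshot name test2 come from the errors above, the image name vm-100-disk-1 is an assumption):

    # list the RBD snapshots of the VM disk
    rbd snap ls mypool/vm-100-disk-1
    # remove the stuck snapshot directly via Ceph
    rbd snap rm mypool/vm-100-disk-1@test2
    # the vmstate volume of the snapshot, if any, is a separate image
    rbd rm mypool/vm-100-state-test2

After that, the matching snapshot section would presumably still have to be removed from /etc/pve/qemu-server/100.conf by hand, since PVE tracks snapshots in the VM config file.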
 
You should not use KRBD when using Ceph with Qemu - Qemu has built-in librbd support.
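Whether a PVE storage uses the kernel driver is controlled by the krbd flag on its entry in /etc/pve/storage.cfg (a sketch; 'my-rbd' is a made-up storage ID):

    rbd: my-rbd
        pool mypool
        monhost 10.0.0.1;10.0.0.2;10.0.0.3
        content images
        krbd 1    # with this set, qemu gets a /dev/rbd/... block device;
                  # without it, qemu talks to ceph via librbd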
 
@fabian: For me, it's a bug. There is no technical reason why it shouldn't work.

We can already create snapshots with krbd; for deletion we just need to check whether krbd is enabled and, if so, call the Ceph method to delete the snapshot instead of qmp delete-drive-snapshot. That is already how taking a snapshot works.

For saving the memory (the vmstate), I need to check.
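In shell terms, the deletion path would roughly have to branch like this (a sketch of the proposed logic, not actual PVE code; storage_uses_krbd and the qmp wrapper are hypothetical names):

    if storage_uses_krbd "$storeid"; then
        # krbd: bypass qemu and delete the snapshot via ceph directly,
        # the same way snapshot creation already does
        rbd snap rm "$pool/$image@$snap"
    else
        # librbd: keep using qemu's qmp delete-drive-snapshot command
        qmp_delete_drive_snapshot "$vmid" "$snap"
    fi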
 
I don't know what Proxmox is using. I only use the Proxmox web interface, and that error happens.

KRBD is a configuration flag on the configured Ceph storage; it tells PVE how to access your Ceph cluster:
  • KRBD means using a kernel driver to expose rbd images as block devices, which has fewer features (because the kernel moves more slowly)
  • librbd is a user-space library for accessing rbd images; it supports the current feature set
KRBD is needed for containers, because LXC does not know how to talk to Ceph directly. Usually people set up two pools, one for use with KRBD/containers and one for use with librbd/Qemu, and map those to two storages in PVE with the appropriate options set.
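A sketch of such a two-storage setup in /etc/pve/storage.cfg (storage IDs, pool names and monitor addresses are made up):

    # librbd access, for qemu guests
    rbd: ceph-vm
        pool vm-pool
        monhost 10.0.0.1;10.0.0.2;10.0.0.3
        content images
        username admin

    # kernel rbd access, for containers
    rbd: ceph-ct
        pool ct-pool
        monhost 10.0.0.1;10.0.0.2;10.0.0.3
        content rootdir
        username admin
        krbd 1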

Note that @spirit is correct that Qemu can also use KRBD (or rather, block devices created via KRBD), but this is not a recommended setup (and, as you can see, has not really been tested much). @spirit proposed some patches on pve-devel that fix some of the more obvious issues, but it is possible that there are more. The recommended way to use Ceph with Qemu is (IMHO) via librbd.
 
Thanks for the explanation.

In another thread, another staff member says something different:
https://forum.proxmox.com/threads/new-ceph-krbd-setting-on-pve-4.23836/#post-119610

I understand, but can I use one storage for both KVM and LXC with KRBD enabled?
Yes.
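A single shared entry would then look like this (again a sketch with made-up names). Note that with krbd set, VM disks also go through the kernel driver, which is exactly the access path that triggers the snapshot problem above:

    rbd: ceph-shared
        pool mypool
        monhost 10.0.0.1;10.0.0.2;10.0.0.3
        content images,rootdir
        krbd 1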

My Proxmox setup came from version 3.4, with only one Ceph storage for both containers and virtual machines. No problems until now. I will try to move all my virtual machine disks to a new pool without KRBD.
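A sketch of that migration (the pool name, pg count and disk name are assumptions for illustration):

    # create a new ceph pool and register it as a librbd storage in PVE
    ceph osd pool create vm-pool 128
    pvesm add rbd ceph-vm --pool vm-pool --monhost '10.0.0.1;10.0.0.2;10.0.0.3' --content images
    # move a VM disk to the new storage, dropping the old copy
    qm move_disk 100 virtio0 ceph-vm --delete 1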

 
