Can no longer clone - Change to Ceph Keyring?

radawson

New Member
Jan 12, 2024
Please help: I can no longer clone a VM due to a keyring error, and I'm not sure what has gone wrong.

Has there been a change to the location or method of Ceph client keyring storage?

At some point within the last month (during which I definitely applied a few updates), I stopped being able to clone VMs because of a keyring error when accessing my Ceph volumes.

Specifically:
Code:
TASK ERROR: clone failed: mirroring error: VM 201 qmp command 'drive-mirror' failed - Could not open 'rbd:ptxpool01/vm-128-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/ptxpool01.keyring': No such file or directory
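For anyone hitting the same thing, the failing open can be reproduced outside of the clone task. This is a sketch using the exact pool, image, and keyring path from the error above; adjust the names to your setup:

```shell
# Reproduce the failing open manually, with the same options QEMU used.
# Pool, image, and keyring path are copied from the error message above.
rbd --id admin --keyring /etc/pve/priv/ceph/ptxpool01.keyring \
    -p ptxpool01 ls
qemu-img info \
    'rbd:ptxpool01/vm-128-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/ptxpool01.keyring'
```

If the rbd command works but qemu-img fails with the same "No such file or directory", that points at the QEMU side rather than at the keyring itself.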

If I'm understanding this correctly, it appears that ptxpool01.keyring cannot be found. I have not (intentionally) changed any of my ceph config, which looks like this:

Code:
[global]
    auth_client_required = cephx
    auth_cluster_required = cephx
    auth_service_required = cephx
    cluster_network = 10.10.200.201/24
    fsid = ea5fc128-b7cc-4a88-8e7c-d73d7489f2e5
    mon_allow_pool_delete = true
    mon_host = 10.10.100.202 10.10.100.203 10.10.100.201
    ms_bind_ipv4 = true
    ms_bind_ipv6 = false
    osd_pool_default_min_size = 2
    osd_pool_default_size = 3
    public_network = 10.10.100.201/24

[client]
    keyring = /etc/pve/priv/$cluster.$name.keyring

[client.crash]
    keyring = /etc/pve/ceph/$cluster.$name.keyring

[mds]
    keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.hv01]
    host = hv01
    mds_standby_for_name = pve

[mds.hv02]
    host = hv02
    mds_standby_for_name = pve

[mds.hv03]
    host = hv03
    mds_standby_for_name = pve

[mon.hv01]
    public_addr = 10.10.100.201

[mon.hv02]
    public_addr = 10.10.100.202

[mon.hv03]
    public_addr = 10.10.100.203

I've looked through the forums and seen references saying that Ceph can't read /etc/pve/priv/, but then how was it configured this way in the first place? I set up Ceph through the GUI, and everything was working fine last month, including cloning.

The permissions of ptxpool01.keyring are:
-rw------- 1 root www-data 151 Jan 30 19:00 ptxpool01.keyring
which looks suspicious: www-data is listed as the group, but the group is granted no permissions, so what is the point of that?
Just to test, I tried changing the mode to 640, but even as root, changing permissions on that file is not permitted.
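A likely explanation for the chmod failure (my assumption, not something established in this thread): /etc/pve is the pmxcfs cluster filesystem, which enforces its own fixed ownership and modes, so even root cannot change them. A quick way to confirm what that path actually is:

```shell
# /etc/pve is provided by pmxcfs (the Proxmox cluster filesystem),
# which enforces fixed permissions; chmod is expected to fail there.
stat -f -c %T /etc/pve                 # filesystem type of the mount
mount | grep '/etc/pve'                # should show a fuse mount
chmod 640 /etc/pve/priv/ceph/ptxpool01.keyring \
  || echo "chmod refused, as expected on pmxcfs"
```

So the odd-looking group and the refused chmod are normal for files under /etc/pve and are probably unrelated to the clone error.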

Bottom line: how do I fix my config so that I can clone a VM again? Should I move the keyring to /var/lib/ceph/mds/?

Also wanted to bring this up here in case anyone else is having this problem, or there was an undocumented Ceph config change. I'm running Ceph Reef 18.2.2.
 
Last edited:
I am having the same issue. I'm not sure if I ever attempted to clone a VM before, but cloning from one cluster node to another with Ceph does not work, and I get the same error. From the error I can't tell whether it is complaining about the keyring or about the disk image of the VM being cloned. I'll attach my error below. Thanks.

Code:
create full clone of drive scsi0 (pve-cluster-ceph-pool:vm-109-disk-0)
transferred 0.0 B of 120.0 GiB (0.00%)
qemu-img: Could not open 'zeroinit:rbd:pve-cluster-ceph-pool/vm-112-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/pve-cluster-ceph-pool.keyring': Could not open 'rbd:pve-cluster-ceph-pool/vm-112-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/pve-cluster-ceph-pool.keyring': No such file or directory
Removing image: 1% complete...
Removing image: 2% complete...
[...]
Removing image: 99% complete...
Removing image: 100% complete...done.
TASK ERROR: clone failed: copy failed: command '/usr/bin/qemu-img convert -p -n -f raw -O raw 'rbd:pve-cluster-ceph-pool/vm-109-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/pve-cluster-ceph-pool.keyring' 'zeroinit:rbd:pve-cluster-ceph-pool/vm-112-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/pve-cluster-ceph-pool.keyring'' failed: exit code 1
 
Last edited:
Just wanted to post that this has been fixed on my side: after I updated the pve-qemu-kvm package, I was able to clone VMs again.
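For reference, a minimal sketch of the update path that worked here, assuming the regression was indeed in pve-qemu-kvm (standard Proxmox commands, but verify against your own repositories and versions):

```shell
# Upgrade only the QEMU package that was reported as the fix.
apt update
apt install --only-upgrade pve-qemu-kvm
# Confirm the installed version; note that running VMs keep using the
# old QEMU binary until they are restarted or live-migrated.
pveversion -v | grep pve-qemu
```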
 
Please help: I can no longer clone a VM due to a keyring error, and I'm not sure what has gone wrong.

Has there been a change to the location or method of Ceph client keyring storage?
Hi, do you still see these problems when cloning a VM with an RBD disk? If yes, could you please post the output of the following commands (please fill in VMID accordingly):
Code:
pveversion -v
qm config VMID --current
cat /etc/pve/storage.cfg
ls -l /etc/ceph/ceph.conf
ls -l /etc/pve/ceph.conf
ls -l /etc/pve/priv/ceph/ptxpool01.keyring
rbd --id admin --keyring /etc/pve/priv/ceph/ptxpool01.keyring -p ptxpool01 ls
Proxmox VE 8.2 came with some changes [1] that add a client.crash keyring to the Ceph config (which apparently took effect, as your Ceph config has a [client.crash] section), but I currently don't think they can cause an issue like the one you're seeing.

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4759